Re: [openstack-dev] [nova] About live-resize spec

2017-09-20 Thread Chen CH Ji
OK, thanks. I will pick it up and get Claudiu's help as well. The original
spec is abandoned; could you please help to restore it?

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82451493
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC



From:   Matt Riedemann 
To: openstack-dev@lists.openstack.org
Date:   09/20/2017 09:11 PM
Subject:Re: [openstack-dev] [nova] About live-resize spec



On 9/20/2017 12:16 AM, Chen CH Ji wrote:
> Spec [1] has been there since 2014, and some patches were proposed but
> abandoned after that. Can someone please provide some info/background
> about why it was postponed, or about the limitations in Nova that have
> blocked it from being implemented?
>
> Some operators suggested that this is valuable functionality, so it would
> be better to have it in the near future... thanks
>
>
> [1]:
https://blueprints.launchpad.net/nova/+spec/instance-live-resize

>
>
>
>
__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>

We talked about this during the Newton midcycle, and from what I remember
we wanted to make this depend on having the ability for users to know
what they are capable of doing with their server instance in any given
cloud. This has grown into the cross-project capabilities API
discussions that happen with the API working group.

At this point, I don't think we have anyone working on a capabilities
API in nova, nor do we have cross-project agreement on a perfect
solution that will work for all projects. At the PTG in Denver I think
we just said we care less about having a perfect guideline for all
projects to have a consistent API, and more about actually documenting
the APIs that each project does have, which we do a pretty good job of
in Nova.

So I think live resize would be fine to pick up again if you're just
resizing CPU/RAM from the flavor and if we provide a policy rule to
disable it in clouds that don't want to expose that feature.

Cloudbase was originally driving it for Hyper-v so you might want to
talk with Claudiu Belu.

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Ddev=DwIGaQ=jf_iaSHvJObTbx-siA1ZOg=8sI5aZT88Uetyy_XsOddbPjIiLSGM-sFnua3lLy2Xr0=8URBCAs-AokrPQobkaJ801kitvJThbGRR-TJ4o-LcIE=anotHeOxKU8HE2ff2CnYn4rwT48cG1wkINchYamlckw=




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][api] why need PUT /servers/{server_id}/metadata/{key} ?

2017-09-20 Thread Chen CH Ji
Yes, I didn't go into that detail and missed the delete flag; that's
exactly what I was looking for, though it is a little bit confusing.
Thanks for the info.




From:   Matt Riedemann 
To: openstack-dev@lists.openstack.org
Date:   09/20/2017 09:15 PM
Subject:Re: [openstack-dev] [nova][api] why need
PUT /servers/{server_id}/metadata/{key} ?



On 9/20/2017 12:48 AM, Chen CH Ji wrote:
> While analyzing other code, it seems we don't need PUT
> /servers/{server_id}/metadata/{key}?
>
> The {key} is only used to check whether it's present in the body, and we
> honor the whole body (body['meta'] in the code):
> https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/server_metadata.py#L80

>
> It looks like it's identical to
> PUT /servers/{server_id}/metadata.
>
> Why do we need this API? Or should it be something like
> PUT /servers/{server_id}/metadata/{key}, where the API side only accepts
> a value to modify the metadata given by {key}?
>
>
>
>
__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>
https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Ddev=DwIGaQ=jf_iaSHvJObTbx-siA1ZOg=8sI5aZT88Uetyy_XsOddbPjIiLSGM-sFnua3lLy2Xr0=DSFbFb2bqll3hC8yrttkW6teiZtFod4XBQIC8jauVlE=EMTJozBhxl7wXNyB7emtzxXMkegVXKWV6Ko8E2uhsPs=

>

This API is a bit confusing, and the code is too, since it all funnels
down to some common code. I think you're missing the 'delete' flag:

https://github.com/openstack/nova/blob/5bf1bb47c7e17c26592a699d07c2faa59d98bfb8/nova/compute/api.py#L3830


If delete=False, as it is in this case, we only add/update the existing
metadata with the new metadata from the request body. If delete=True,
then we overwrite the instance metadata with whatever is in the request.
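A rough sketch of those delete-flag semantics (the real logic lives in nova.compute.api; the function and parameter names here are hypothetical, for illustration only):

```python
# Illustrative sketch of the delete-flag semantics described above; this
# is not Nova's actual code, just the behavior it implements.
def apply_metadata_update(existing, new_metadata, delete=False):
    """Return the instance metadata after a PUT request.

    delete=False: merge the new keys into the existing metadata
    (add/update), leaving other keys untouched.
    delete=True: replace the metadata wholesale with the request body.
    """
    if delete:
        return dict(new_metadata)
    merged = dict(existing)
    merged.update(new_metadata)
    return merged
```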

Does that answer your question?

This API is problematic, and we have bugs filed against it since it's
not atomic: with two concurrent requests, one will overwrite the other.
We should really have a generation ID or etag on this data to make sure
it's updated atomically.
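A sketch of what the generation-ID idea could look like; this is not how the metadata API works today, just an illustration of the proposed fix, with all names hypothetical:

```python
# Illustrative compare-and-swap with a generation ID, as suggested above.
class ConflictError(Exception):
    """Raised when a writer's view of the data is stale (would map to 409)."""

class MetadataStore:
    def __init__(self, metadata=None):
        self._metadata = dict(metadata or {})
        self._generation = 0  # bumped on every successful write

    def read(self):
        # A reader gets the data plus the generation it was read at.
        return dict(self._metadata), self._generation

    def write(self, new_metadata, expected_generation):
        # Reject the write if another request updated the data in
        # between, instead of silently overwriting it.
        if expected_generation != self._generation:
            raise ConflictError('metadata changed; re-read and retry')
        self._metadata = dict(new_metadata)
        self._generation += 1
        return self._generation
```

A client that gets the conflict would re-read and retry, so neither concurrent update is silently lost.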

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Ddev=DwIGaQ=jf_iaSHvJObTbx-siA1ZOg=8sI5aZT88Uetyy_XsOddbPjIiLSGM-sFnua3lLy2Xr0=DSFbFb2bqll3hC8yrttkW6teiZtFod4XBQIC8jauVlE=EMTJozBhxl7wXNyB7emtzxXMkegVXKWV6Ko8E2uhsPs=




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][nova][mogan] How to show respect to the original authors?

2017-09-20 Thread Clint Byrum
Excerpts from Michael Still's message of 2017-09-20 10:25:17 -0600:
> Dims, I'm not sure that's actually possible though. Many of these files
> have been through rewrites and developed over a large number of years.
> Listing all authors isn't practical.
> 
> Given the horse has bolted on forking these files, I feel like a comment
> acknowledging the original source file is probably sufficient.
> 
> What is concerning to me is that some of these files are part of the "ABI"
> of nova, and if mogan diverges from that then I think we're going to see
> user complaints in the future. Specifically configdrive, and metadata seem
> like examples of this. I don't want to see us end up in another "managed
> cut and paste" like early oslo where nova continues to develop these and
> mogan doesn't notice the changes.
> 
> I'm not sure how we resolve that. One option would be to refactor these
> files into a shared library.
> 

Agreed 100%. It would be better to have something completely different
than something that works 97% the same but constantly skews.

Luckily, since these things are part of the ABI of Nova, they are
versioned in many cases, and in all cases they have a well-defined
interface on one side. It seems like it should be relatively
straightforward to wrap the other side of them and call it a library.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Meeting Thursday Sept 21st at 8:00 UTC

2017-09-20 Thread Ghanshyam Mann
Hello everyone,

This is a reminder that the weekly OpenStack QA team IRC meeting will be
held on Thursday, Sept 21st at 8:00 UTC in the #openstack-meeting channel.

The agenda for the meeting can be found here:
 
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_September_21st_2017_.280800_UTC.29

Anyone is welcome to add an item to the agenda.

-gmann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] This week drivers meeting

2017-09-20 Thread Miguel Lavalle
Hi Neutrinos,

This week's drivers meeting is cancelled. We will resume normally next
week, on September 28th.

Best regards

Miguel
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Unable to add member to Zun core team

2017-09-20 Thread Hongbin Lu
Hi Infra team,

I tried to add Kien Nguyen kie...@vn.fujitsu.com
to Zun's core team [1], but Gerrit prevented me from doing that. The
attached file shows the error. Could anyone provide a suggestion?

Best regards,
Hongbin

[1] https://review.openstack.org/#/admin/groups/1382,members
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mogan]Tasks update of Mogan Project

2017-09-20 Thread hao wang
Hi,

We are glad to present this week's tasks update of Mogan.

See the details below:


Essential Priorities
====================

1. Adopt servers (wanghao, litao)

blueprint: https://blueprints.launchpad.net/mogan/+spec/manage-existing-bms

spec: https://review.openstack.org/#/c/459967/ merged

code: https://review.openstack.org/#/c/479660/ merged

  https://review.openstack.org/#/c/481544/ merged

2. Add root disk partitions support (zhenguo)

https://review.openstack.org/#/c/499039/ merged

Move to Queens
==============

1. Valence integration (zhenguo, shaohe, luyao, Xinran)  Move to next cycle.

blueprint: https://blueprints.launchpad.net/mogan/+spec/valence-integration

spec: 
https://review.openstack.org/#/c/441790/3/specs/pike/approved/valence-integration.rst

No updates


2. Support boot-from-volume in Mogan(wanghao, zhenguo) Move to next cycle

blueprint: 
https://blueprints.launchpad.net/mogan/+spec/support-boot-from-volume-in-mogan

code: https://review.openstack.org/#/c/489455/


Optional Priorities
===================
1. Documentation (zhenguo, liusheng)

Add states and transitions diagram  https://review.openstack.org/471293

Add sample config and policy files to mogan docs
https://review.openstack.org/471637

Add documentation about testing https://review.openstack.org/#/c/472028


BTW, we will hold our first online PTG this Friday, to decide what we
want to do in the Queens release.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Blazar] PTG recap

2017-09-20 Thread Masahito MUROI


Hi all,

This is a summary of the Queens PTG discussion for Blazar. We had lots
of items at the meeting, so I'm writing down the main topics in this
mail. If you're interested in all of the items, please see the Queens
PTG etherpad page[1].


Priorities for Queens cycle
---

We discussed our team priorities for the Queens cycle. The main themes
for the release are resiliency and manageability. For resiliency, Blazar
will support recovery from reserved-resource failures and atomic API
transactions. For manageability, Blazar will follow some community-wide
goals, and its plugins will be refactored.


If you'd like to see more details, please go to the etherpad page[2].

API for querying resource usage/availability
--

This is a usability improvement feature. It enables users to query how
many, how much, and for how long they can reserve a specific resource at
a specific time. Blazar currently supports APIs that provide CRUD
operations on reservations, so users don't have a way to check how many
resources are available in a specific time window. With this API, users
can query how many resources are available at a specific time.



Resource Monitoring
---

This is a resource-failure recovery mechanism for Blazar. Blazar
doesn't react to reserved-resource failures today. For instance, if a
hypervisor which is reserved by a user for future usage goes down,
Blazar doesn't re-assign a new hypervisor to the user's reservation.
With the resource monitoring feature, Blazar will manage reservations
robustly against unexpected failures.



Preemptible instance
--------------------

This topic is one of the ongoing discussions in the Nova team[3], and
the Blazar team was involved in it. Blazar is one of the possibilities
for the "reaper" service in that discussion, since Blazar's instance
reservation feature already includes time-based reaping. So the Blazar
team is interested in supporting this feature.



1. https://etherpad.openstack.org/p/blazar-ptg-queens
2. https://launchpad.net/blazar/+milestone/0.4.0
3. 
http://lists.openstack.org/pipermail/openstack-dev/2017-September/122258.html



best regards,
Masahito


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] Extending Topology

2017-09-20 Thread Muhammad Usman
Dear Ifat,

This is Usman. Previously, I contacted you about contributing to the
OpenStack Vitrage project, but could not follow up with you for some
time due to various reasons.

However, to get actively involved in the OpenStack project, I have
decided to join the OpenStack Summit in Sydney.

Also, based on my previous experience, the Vitrage project is already in
good shape, so it's not easy to propose new ideas.

Therefore, a better way to start is to contribute on the development
side to work that is already proposed and ongoing.

-- 

*Regards*

*Muhammad Usman*
Application Engineer
LMK Resources (pvt) Limited
www.lmkr.com
+92 (323) 5599 068

On Mon, Mar 27, 2017 at 7:32 PM, Afek, Ifat (Nokia - IL/Kfar Sava) <
ifat.a...@nokia.com> wrote:

> Hi,
>
> Let me try and explain the more general use case.
>
> You can query OVS for the switches information, and understand how they
> are mapped to one another. This is not enough for knowing the exact route
> of the network traffic for a certain VM.
>
> A certain switch can be connected to more than one other switch. You can,
> as you said, query the network type (encapsulation) information from
> Neutron. But then you will additionally need to query the rules of the
> specific switch from OVS, in order to know which route is taken for each
> encapsulation type.
>
> Another problematic use case is when the switches are not connected to
> each other. The traffic can be redirected by a network-stack software
> component, so you will have to query that component as well in order to
> determine the route.
>
> And on top of all this, we need to think how to best represent this
> information in Vitrage (i.e. how to draw the graph, which vertices to
> connect to one another, etc.).
>
> IMO, this is all feasible and will give a lot of value to Vitrage. Just
> not easy to implement.
>
> Best Regards,
> Ifat.
>
>
> On 22/03/2017, 08:50, "Muhammad Usman"  wrote:
>
> Hello Ifat,
>
> I tried to look more in depth at the issues you mentioned regarding
> the extension of vSwitches. Due to the complexity involved in
> generating this topology and its associated effects, I believe we need
> to set up some baseline (e.g. adding a configuration file for
> specifying bridges in the existing deployment setup). Using that
> baseline, the topology can be constructed, and the type of network
> (e.g. vlan or vxlan) and the associated path can be extracted from
> Neutron. However, I cannot understand the more general case you
> mentioned. Do you mean nova-network?
>
> Regarding the sunburst representation - yes, I agree: if you want to
> keep the compute hierarchy separate, then adding networking components
> is not a good idea.
>
> Also, suggestions from other Vitrage members are welcome.
>
>
> > On Thu, Mar 16, 2017 at 6:44 PM, Afek, Ifat (Nokia - IL) <
> > ifat.a...@nokia.com > wrote:
> >
> >> Hi,
> >>
> >>
> >>
> >> Adding switches to the Vitrage topology is generally a good idea,
> but the
> >> implementation might be very complex. Your diagram shows a simple
> use
> > case,
> >> where all switches are linked to one another and it is easy to
> determine
> >> the effect of a failure on the vms. However, in the more general
> case
> > there
> >> might be switches without a connecting port (e.g. they might be
> connected
> >> via the network stack). In such cases, it is not clear how to model
> the
> >> switches topology in Vitrage. Another case to consider is when the
> >> network
> >> type affects the packets path, like vlan vs. vxlan. If you have an
> idea
> >> of
> >> how to solve these issues, I will be happy to hear it.
> >>
> >>
> >>
> >> Regarding the sunburst representation – I’m not sure I understand
> your
> >> diagram. Currently the sunburst is meant to show (only) the compute
> >> hierarchy: zones, hosts and instances. It is arranged in a
> containment
> >> relationship, i.e. every instance on the outer ring appears next to
> its
> >> host in the inner ring. If you add the bridges in the middle, you
> lose
> > this
> >> containment relationship. Can you please explain to me the suggested
> >> diagram?
> >>
> >>
> >>
> >> BTW, you can send such questions to OpenStack mailing list (
> >> openstack-dev@lists.openstack.org ) with [vitrage]
> tag in
> > the title, and
> >> possibly get replies from other contributors as well.
> >>
> >>
> >>
> >> Best Regards,
> >>
> >> Ifat.
> >>
> >>
> >>
> >>
> >>
> >> *From: *Muhammad Usman
> >> *Date: *Monday, 13 March 2017 at 09:16
> >>
> >> *To: *"Afek, Ifat (Nokia - IL)"
> >> *Cc: *JungSu Han
> >> *Subject: *Re: OpenStack Vitrage
> >>
> >>
>  

[openstack-dev] [neutron] team ptg photos

2017-09-20 Thread Kevin Benton
https://photos.app.goo.gl/Aqa51E2aVkv5b4ah1
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] Simplification in OpenStack

2017-09-20 Thread Clint Byrum
Wading in a bit late as I've been off-list for a while, but I have thoughts 
here.

Excerpts from Jay Pipes's message of 2017-09-13 13:44:55 -0400:
> On 09/12/2017 06:53 PM, Boris Pavlovic wrote:
> > Mike,
> > 
> > Great initiative; unfortunately I wasn't able to attend it, however I 
> > have some thoughts...
> > You can't simplify OpenStack just by fixing a few issues that are 
> > described in the etherpad.
> > 
> > The TC should work on shrinking the OpenStack use cases and moving 
> > towards a complete product (box) solution instead of a bunch of barely 
> > related things.
> 
> OpenStack is not a product. It's a collection of projects that represent 
> a toolkit for various cloud-computing functionality.
>

I think Boris was suggesting that making it a product would simplify it.

I believe there is some effort under way to try this, but my brain
has ceased to remember what that effort is called or how it is being
implemented. Something about common use cases and the exact mix of
projects + configuration to get there, and testing it? Help?

> > *Simple things to improve: *
> > /This is going to allow community to work together, and actually get 
> > feedback in standard way, and incrementally improve quality. /
> > 
> > 1) There should be one and only one:
> > 1.1) deployment/packaging(may be docker) upgrade mechanism used by 
> > everybody
> 
> Good luck with that :) The likelihood of the deployer/packager community 
> agreeing on a single solution is zero.
> 

I think Boris is suggesting that the OpenStack development community
pick one to use, not the packaging and deployer community. The
only common thing dev has in this area is devstack, and that
has allowed dev to largely ignore issues they create because
they're not feeling the pain of the average user who is using
puppet/chef/ansible/tripleo/kolla/in-house-magic to deploy.

> > 1.2) monitoring/logging/tracing mechanism used by everybody
> 
> Also close to zero chance of agreeing on a single solution. Better to 
> focus instead on ensuring various service projects are monitorable and 
> transparent.
> 

I'm less enthused about this one as well. Monitoring, alerting, defining
business rules for what is broken and what isn't are very org-specific
things.

I also don't think OpenStack fails at this and there is plenty exposed
in clear ways for monitors to be created.

> > 1.3) way to configure all services (e.g. k8 etcd way)
> 
> Are you referring to the way to configure k8s services or the way to 
> configure/setup an *application* that is running on k8s? If the former, 
> then there is *not* a single way of configuring k8s services. If the 
> latter, there isn't a single way of configuring that either. In fact, 
> despite Helm being a popular new entrant to the k8s application package 
> format discussion, k8s itself is decidedly *not* opinionated about how 
> an application is configured. Use a CMDB, use Helm, use env variables, 
> use confd, use whatever. k8s doesn't care.
> 

We do have one way to configure things. Well.. two.

*) Startup-time things are configured in config files.
*) Run-time-changeable things are in databases fronted by admin APIs/tools.
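A generic illustration of that two-tier pattern, with the stdlib configparser standing in for oslo.config and a plain dict standing in for the database (all names here are made up for the sketch):

```python
import configparser

# Tier 1: immutable startup options, read once from a config file.
STARTUP_CONF = """
[DEFAULT]
workers = 4
debug = false
"""

def load_startup_options(text):
    # Read at service startup; changing these requires a restart.
    parser = configparser.ConfigParser()
    parser.read_string(text)
    return {
        'workers': parser.getint('DEFAULT', 'workers'),
        'debug': parser.getboolean('DEFAULT', 'debug'),
    }

# Tier 2: run-time-changeable values behind an admin API, backed by a DB.
class AdminSettings:
    """Stand-in for DB-backed settings fronted by an admin API or tool."""

    def __init__(self):
        self._db = {'quota_instances': 10}

    def update(self, key, value):
        # Roughly what a hypothetical PUT /admin/settings/{key} would do.
        self._db[key] = value

    def get(self, key):
        return self._db[key]
```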

> > 2) Projects must have standardize interface that allows these projects 
> > to use them in same way.
> 
> Give examples of services that communicate over *non-standard* 
> interfaces. I don't know of any.
> 

Agreed here too. I'd like to see a more clear separation between nova,
neutron, and cinder on the hypervisor, but the way they're coupled now
*is* standardized.

> > 3) Testing & R should be performed only against this standard deployment
> 
> Sorry, this is laughable. There will never be a standard deployment 
> because there are infinite use cases that infrastructure supports. 
> *Your* definition of what works for GoDaddy is decidedly different from 
> someone else's definition of what works for them.
> 

If there were a few well defined product definitions, there could be. It's
not laughable at all to me. devstack and the configs it creates are useful
for lightweight testing, but they're not necessarily representative of
the standard makeup of real-world clouds.

> > *Hard things to improve: *
> > 
> > OpenStack projects were split in far from ideal way, which leads to 
> > bunch of gaps that we have now:
> > 1.1) Code & functional duplications:  Quotas, Schedulers, Reservations, 
> > Health checks, Loggign, Tracing, 
> 
> There is certainly code duplication in some areas, yes.
> 

I feel like this de-duplication has been moving at the slow-but-consistent
pace anyone can hope for since it was noticed and oslo was created.

It's now at the things that are really hard to de-dupe like quotas and policy.

> > 1.2) Non optimal workflows (booting VM takes 400 DB requests) because 
> > data is stored in Cinder,Nova,Neutron
> 
> Sorry, I call bullshit on this. It does not take 400 DB requests to boot 
> a VM. Also: the DB is not at all the bottleneck in the VM launch 

Re: [openstack-dev] [all] review.openstack.org downtime and Gerrit upgrade TODAY 15:00 UTC - 23:59 UTC

2017-09-20 Thread Clark Boylan
On Mon, Sep 18, 2017, at 04:58 PM, Clark Boylan wrote:
> On Mon, Sep 18, 2017, at 06:43 AM, Andreas Jaeger wrote:
> > Just a friendly reminder that the upgrade will happen TODAY, Monday
> > 18th, starting at 15:00 UTC. The infra team expects that it takes 8
> > hours, so until 2359 UTC.
> 
> This work was functionally completed at 23:43 UTC. We are now running
> Gerrit 2.13.9. There are some cleanup steps that need to be performed in
> Infra land, mostly to get puppet running properly again.
> 
> You will also notice that newer Gerrit behaves in some new and exciting
> ways. Most of these should be improvements like not needing to reapprove
> changes that already have a +1 Workflow but also have a +1 Verified;
> recheck should now work for these cases. If you find a new behavior that
> looks like a bug please let us know, but we should also work to file
> them upstream so that newer Gerrit can address them.
> 
> Feel free to ask us questions if anything else comes up.
> 
> Thank you to everyone that helped with the upgrade. Seems like these get
> more and more difficult with each Gerrit release so all the help is
> greatly appreciated.

As a followup we have been tracking new fun issues/behaviors in Gerrit
and fixing them over the last couple days. Here is an update on where we
are currently at.

Gerrit emails are slow. You may have noticed that you aren't getting
quite as much Gerrit email as before. This is because Gerrit is only
sending about one email a minute. Upstream bug is at
https://bugs.chromium.org/p/gerrit/issues/detail?id=7261 and we have
just got https://review.openstack.org/#/c/505677 merged based on the
info in that upstream bug. This won't be applied until we get puppet
running on review.openstack.org again (more on that later) and will
require another Gerrit service restart.

The Gerrit web UI's file editor behaves oddly resulting in what appear
to be API timeouts. This also seems to affect gertty. I don't think
anyone has dug in far enough to understand what is going on yet.

Now for known issues that should be fixed.

The Gerrit dashboard creator was using queries that didn't work with the
new Gerrit query behavior. Sdague got this sorted out quickly.

The Gerrit event stream changed its ref-updated data and now includes
refs/heads/$branchname instead of just $branchname under refName when
changes merge. This confused Zuul and meant no post jobs were running.
Zuul has been updated to handle this new behavior and post jobs are
running.
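The kind of normalization Zuul needed can be sketched like this (illustrative only, not Zuul's actual code):

```python
# Old Gerrit put a bare branch name in the ref-updated event's refName;
# new Gerrit emits a fully qualified ref. Accept both.
def branch_from_ref_name(ref_name):
    prefix = 'refs/heads/'
    if ref_name.startswith(prefix):
        return ref_name[len(prefix):]
    return ref_name  # old-style event: already a bare branch name
```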

There were no gitweb links. This wasn't caught in testing because we
used a test cgit setup on review-dev. Fix here was just to switch to
using cgit on review.openstack.org (though the link is still called
"gitweb" in the Gerrit UI for reasons).

Memory consumption has gone up which initially led to frequent garbage
collection which led to 500 errors. We bumped heap memory available to
Gerrit up to 48GB (from 30GB) and that seems to have stabilized things.
Thankfully while needing more memory it doesn't seem to continuously
grow like it did on the old version (which forced us to do semi frequent
service restarts). We will have to monitor Gerrit to ensure it is
properly stable over time.

We could not create new projects in Gerrit. This is because Gerrit 2.12
dropped the --name argument from the create-project command which
Gerritlib was using. We have updated Gerritlib to check the Gerrit
version and pass the correct arguments to create-project.
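A sketch of that kind of version-conditional argument handling (hypothetical helper, not Gerritlib's actual code):

```python
# Gerrit 2.12 dropped the --name flag from create-project, so the
# command line has to be built based on the server version.
def create_project_args(project, gerrit_version):
    major, minor = (int(part) for part in gerrit_version.split('.')[:2])
    if (major, minor) >= (2, 12):
        # Newer Gerrit takes the project name as a positional argument.
        return ['gerrit', 'create-project', project]
    return ['gerrit', 'create-project', '--name', project]
```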

Unfortunately, we still can't create new projects just yet, this is
related to puppet not running on review.openstack.org right now. The
gerrit server itself is fine and would puppet except that we force
puppet to run on our git mirror farm first to ensure proper mirroring of
repos and those have been failing since the CentOS 7.4 release. Once
we've got puppet happy we can get back to creating new projects in
Gerrit.

All the details can be found at
https://etherpad.openstack.org/p/gerrit-2.13-issues.

Thank you for your patience,
Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] question about OpenStack Python glance API -- list share image by tenant.

2017-09-20 Thread zhihao wang
Dear all,


I have a question about the OpenStack Python glance client API.


I want to get the list of all images shared with a tenant, but I don't
know how to get the list.


I can only find this "list sharings" call, but when I use a tenant it did
not work and showed an error like "No tenant_id attribute":

List Sharings

Describe sharing permissions by image or tenant:

glance.image_members.list(image_id)



Just like in Horizon, I want to get the images shared with the tenant.

Does anyone know how to get this using the Python API?


Please help


Thanks

Wally
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][stable] EOL tags and upper-constraints.txt in tox.ini

2017-09-20 Thread Doug Hellmann
Excerpts from Tony Breeds's message of 2017-09-20 17:18:51 -0400:
> On Wed, Sep 20, 2017 at 04:51:21PM -0400, Doug Hellmann wrote:
> > Excerpts from Jeremy Stanley's message of 2017-09-20 18:59:50 +:
> > > On 2017-09-20 14:46:32 -0400 (-0400), Tony Breeds wrote:
> > > [...]
> > > > I'd like to find a solution that doesn't need a tox_install.sh
> > > [...]
> > > 
> > > This wart came up when discussing Zuul v3 job translations... what
> > > would be ideal is if someone has the bandwidth to work upstream in
> > > tox to add a constraints file option agreeable to its maintainer so
> > > that we don't have to override install-command to add it in all our
> > > configs.
> > 
> > One of the reasons we need a script is to modify the constraint list to
> > remove the current library from the list if it is present. I'm not sure
> > that special logic would be something the tox maintainers would want.
> 
> My proposal is that if 'constraints' is enabled then it's only used for
> the deps install.  Since the deps install should get everything that
> $project needs, omitting constraints during the other phase won't hurt
> us *and* removes the need for that aspect of tox_install.sh.
> 
> In my head it looks like:
> ---
> [testenv]
> usedevelop = True
> # constraints = 
> {env:UPPER_CONSTRAINTS_FILE:https://tarballs.openstack.org/constraints/master.txt}
> constraints = 
> {env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt}
> install_command = pip install {constraints} {opts} {packages}
> deps = -r{toxinidir}/requirements.txt
>-r{toxinidir}/test-requirements.txt
> ---
> 
> It won't remove tox_install.sh but it will reduce the number of projects
> that need it.  For example, oslo.* and os-* shouldn't need a
> tox_install.sh if we can make the above happen.
> 
> 
> Yours Tony.

I like the idea. I'm not sure why, if the constraints file is only used
for the dependency installation step, we still need tox_install.sh? If
that's just to avoid updating the URL when we create branches, I can
live with continuing to do that step if we figure out some other way to
minimize the open race window.

Doug



Re: [openstack-dev] [tc][nova][mogan] How to show respect to the original authors?

2017-09-20 Thread John Dickinson


On 20 Sep 2017, at 9:25, Michael Still wrote:

> Dims, I'm not sure that's actually possible though. Many of these files
> have been through rewrites and developed over a large number of years.
> Listing all authors isn't practical.
>
> Given the horse has bolted on forking these files, I feel like a comment
> acknowledging the original source file is probably sufficient.

In Swift's repo, we acknowledge the original authors in a section of the 
AUTHORS file

https://github.com/openstack/swift/blob/master/AUTHORS

--John



>
> What is concerning to me is that some of these files are part of the "ABI"
> of nova, and if mogan diverges from that then I think we're going to see
> user complaints in the future. Specifically, configdrive and metadata seem
> like examples of this. I don't want to see us end up in another "managed
> cut and paste" like early oslo where nova continues to develop these and
> mogan doesn't notice the changes.
>
> I'm not sure how we resolve that. One option would be to refactor these
> files into a shared library.
>
> Michael
>
>
>
>
> On Wed, Sep 20, 2017 at 5:51 AM, Davanum Srinivas  wrote:
>
>> Zhenguo,
>>
>> Thanks for bringing this up.
>>
>> For #1, yes, please indicate which file it came from in Nova, so anyone
>> who wants to cross-check for fixes etc. can go look in Nova.
>> For #2, when you pick up a commit from Nova, please make sure the
>> commit message in Mogan has the following:
>>    * The Gerrit change ID(s) of the original commit, so folks can
>> easily go find the original commit in Gerrit
>>    * Add "Co-Authored-By:" tags for each author in the original commit
>> so they get credit
>>
>> Also, please make sure you do not alter any copyright or license-related
>> information in the header when you first copy a file from another project.
>>
>> Thanks,
>> Dims
>>
>> On Wed, Sep 20, 2017 at 4:20 AM, Zhenguo Niu 
>> wrote:
>>> Hi all,
>>>
>>> I'm from the Mogan team; we copied some code/frameworks from Nova since we
>>> want to be a Nova with a bare-metal-specific API.
>>> On why we reinvented the wheel, you can find more information here [1].
>>>
>>> I would like to know what's the decent way to show our respect to the
>>> original authors we copied from.
>>>
>>> After discussing with the team, we plan to do some improvements as below:
>>>
>>> 1. Add some comments to the beginning of such files to indicate that
>>> they leveraged the implementation of Nova.
>>>
>>> https://github.com/openstack/mogan/blob/master/mogan/baremetal/ironic/driver.py#L19
>>> https://github.com/openstack/mogan/blob/master/mogan/console/websocketproxy.py#L17-L18
>>> https://github.com/openstack/mogan/blob/master/mogan/consoleauth/manager.py#L17
>>> https://github.com/openstack/mogan/blob/master/mogan/engine/configdrive.py#L17
>>> https://github.com/openstack/mogan/blob/master/mogan/engine/metadata.py#L18
>>> https://github.com/openstack/mogan/blob/master/mogan/network/api.py#L18
>>> https://github.com/openstack/mogan/blob/master/mogan/objects/aggregate.py#L17
>>> https://github.com/openstack/mogan/blob/master/mogan/objects/keypair.py#L17
>>> https://github.com/openstack/mogan/blob/master/mogan/objects/server_fault.py#L17
>>> https://github.com/openstack/mogan/blob/master/mogan/objects/server_group.py#L17
>>> https://github.com/openstack/mogan/blob/master/mogan/scheduler/client/report.py#L17
>>> https://github.com/openstack/mogan/blob/master/mogan/scheduler/filter_scheduler.py#L17
>>>
>>> 2. For changes that follow what Nova changed, reference the original
>>> authors in the commit messages.
>>>
>>>
>>> Please let me know if there is something else we need to do, or if there
>>> are already some existing principles we can follow. Thanks!
>>>
>>>
>>>
>>> [1] https://wiki.openstack.org/wiki/Mogan
>>>
>>>
>>> --
>>> Best Regards,
>>> Zhenguo Niu
>>>
>>> 
>>>
>>
>>
>>
>> --
>> Davanum Srinivas :: https://twitter.com/dims
>>
>>





Re: [openstack-dev] [requirements][stable] EOL tags and upper-constraints.txt in tox.ini

2017-09-20 Thread Tony Breeds
On Wed, Sep 20, 2017 at 04:51:21PM -0400, Doug Hellmann wrote:
> Excerpts from Jeremy Stanley's message of 2017-09-20 18:59:50 +:
> > On 2017-09-20 14:46:32 -0400 (-0400), Tony Breeds wrote:
> > [...]
> > > I'd like to find a solution that doesn't need a tox_install.sh
> > [...]
> > 
> > This wart came up when discussing Zuul v3 job translations... what
> > would be ideal is if someone has the bandwidth to work upstream in
> > tox to add a constraints file option agreeable to its maintainer so
> > that we don't have to override install-command to add it in all our
> > configs.
> 
> One of the reasons we need a script is to modify the constraint list to
> remove the current library from the list if it is present. I'm not sure
> that special logic would be something the tox maintainers would want.

My proposal is that if 'constraints' is enabled then it's only used for
the deps install.  Since that should get everything that $project needs
if it's omitted during that phase it won't hurt us *and* removes the
need for that aspect of the tox_install.sh

In my head it looks like:
---
[testenv]
usedevelop = True
# constraints = {env:UPPER_CONSTRAINTS_FILE:https://tarballs.openstack.org/constraints/master.txt}
constraints = {env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt}
install_command = pip install {constraints} {opts} {packages}
deps = -r{toxinidir}/requirements.txt
   -r{toxinidir}/test-requirements.txt
---

It won't remove tox_install.sh but it will reduce the number of projects
that need it.  For example oslo.* and os-* shouldn't need a
tox_install.sh if we can make the above happen.
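
Spelled out as a sketch (illustrative only — tox has no 'constraints'
option today, so the command construction below is hypothetical):

```python
# Hypothetical sketch of the proposal: apply the constraints file when
# installing deps, but install the project itself without it.
def build_pip_commands(deps, constraints_url):
    """Return (deps_install, project_install) pip invocations."""
    deps_install = ["pip", "install", "-c", constraints_url] + list(deps)
    # No -c here: a pin for the project itself in upper-constraints can
    # no longer conflict with installing the local checkout.
    project_install = ["pip", "install", "-e", "."]
    return deps_install, project_install
```

Since the constraints file would only be consulted while resolving deps,
the "remove ourselves from the constraints list" step that tox_install.sh
performs today becomes unnecessary for most projects.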


Yours Tony.




Re: [openstack-dev] [requirements][stable] EOL tags and upper-constraints.txt in tox.ini

2017-09-20 Thread Doug Hellmann
Excerpts from Jeremy Stanley's message of 2017-09-20 18:59:50 +:
> On 2017-09-20 14:46:32 -0400 (-0400), Tony Breeds wrote:
> [...]
> > I'd like to find a solution that doesn't need a tox_install.sh
> [...]
> 
> This wart came up when discussing Zuul v3 job translations... what
> would be ideal is if someone has the bandwidth to work upstream in
> tox to add a constraints file option agreeable to its maintainer so
> that we don't have to override install-command to add it in all our
> configs.

One of the reasons we need a script is to modify the constraint list to
remove the current library from the list if it is present. I'm not sure
that special logic would be something the tox maintainers would want.
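
The special logic in question boils down to something like this (a
simplified stand-in for what tox_install.sh does, not the actual
implementation; the package names are just examples):

```python
import re

def canonicalize(name):
    # PEP 503-style name normalization, so "oslo_config" == "oslo.config".
    return re.sub(r"[-_.]+", "-", name).lower()

def filter_constraints(constraints_text, package_name):
    """Return the constraints text minus any pin for package_name."""
    target = canonicalize(package_name)
    kept = []
    for line in constraints_text.splitlines():
        # upper-constraints lines look like "name===1.2.3".
        name = line.split("=", 1)[0].strip()
        if name and canonicalize(name) == target:
            continue  # drop the pin for the library under test
        kept.append(line)
    return "\n".join(kept)
```

Without this step, pip would refuse to install the local checkout of a
library whose released version is pinned in the constraints list.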

Doug



[openstack-dev] [docs][i18n][ptg] Denver PTG Summary for Docs (and i18n)

2017-09-20 Thread Petr Kovar
Hi all,

Just wanted to share a few notes and a brief summary of what happened in
Denver last week, during the Project Team Gathering, in docs and i18n
shared sessions.

We had fairly good attendance of 5 to 15 people during the first two
days. A number of people primarily working on other projects showed up in
our sessions to chat about common issues and plans. On the other hand, many
of our cores couldn't attend, though I'm hoping that if the PTG comes closer
to Europe, attendance will improve.

On a related note, we also welcomed new cores to the team and the Denver
PTG was an opportunity for some of us to meet face to face for the first
time.

The overall schedule for all our sessions with some comments can be found at
https://etherpad.openstack.org/p/docs-i18n-ptg-queens, there's also
https://etherpad.openstack.org/p/doc-future-problems with a more detailed
discussion of some of the items.

To summarize what I found most important:

VISION

We spent a lot of time discussing a new vision document for the docs team,
based on the updated docs team mission statement
(https://review.openstack.org/#/c/499556/). A working draft is available
here:

https://etherpad.openstack.org/p/docs-i18n-ptg-queens-mission-statement

In the coming weeks, we'll transform those notes into a final draft and
share it with the broader community for input.

The next step after this would be to update/rework
https://governance.openstack.org/tc/reference/tags/docs_follows-policy.html.

EOL DOCS AVAILABILITY

Discussed addressing numerous issues related to publishing EOL docs and our
docs retention policy, based on community feedback and many requests. The
plan is to write a retention policy docs spec, resurrect Mitaka docs and add
badges to clearly identify unsupported content, among other things.

HA GUIDE

Work will continue on the guide per the previous plans.

ARCHITECTURE GUIDE

From Pike, this guide is frozen. Patches still welcome.

OPENSTACK-MANUALS BUGS

To be moved to appropriate projects, if applicable.

CONFIG DOCS

Discussed with several teams, particularly cinder, how to generate
config tables using oslo_config.sphinxext.
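
For reference, wiring that up in a project's docs build looks roughly
like this (a hedged sketch — the `oslo.messaging` namespace is just an
example; consult the oslo.config documentation for the directive's exact
options):

```python
# doc/source/conf.py -- enable the oslo.config Sphinx integration.
# The extension provides a "show-options" directive that renders
# configuration option tables from the named namespaces.
extensions = [
    'oslo_config.sphinxext',
]

# Then, in an .rst page, config tables can be generated with:
#
#   .. show-options::
#
#      oslo.messaging
```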

REDESIGNING DOCS SITE

A couple of improvements to adjust the site content structure and
overall design were discussed, such as tweaking lists of projects, linking
from series subpages back to the top page, install vs. deployment pages,
linking to projects with no stable branches, redoing our sitemap, switching
to SCSS, improving HTML semantics, etc.

DOCS TOOLING

Doug kindly offered to give the team and the community a presentation
about the updated docs tooling:

https://etherpad.openstack.org/p/doc-tool-lunch-and-learn

We'll turn this into a proper doc and share with everybody.

INSTALL GUIDES TESTING

A couple of people and teams showed interest in testing the Pike install
guides (common content + minimal deployment services). To track this
activity, we'll use this wiki page:

https://wiki.openstack.org/wiki/Documentation/PikeDocTesting

TRANSLATIONS

Discussed generating multiple PO files for docs
migrated to project repos, to make translators' lives easier.

THAT'S IT?

Please add to the list if I missed anything important, particularly for
i18n.

Thank you to everybody who attended the sessions, and in particular to Alex
who took most of the notes. I think this PTG was very productive, full of
energy, and intense!

Cheers,
pk



Re: [openstack-dev] [neutron] MTU native ovs firewall driver

2017-09-20 Thread Ian Wells
Since OVS is doing L2 forwarding, you should be fine setting the MTU to as
high as you choose, which would probably be the segment_mtu in the config,
since that's what it defines - the largest MTU that (from the Neutron API
perspective) is usable and (from the OVS perspective) will be used in the
system.  A 1500MTU Neutron network will work fine over a 9000MTU OVS switch.

What won't work is sending a 1500MTU network to a 9000MTU router port.  So
if you're doing any L3 (where the packet arrives at an interface, rather
than travels a segment) you need to consider those MTUs in light of the
Neutron network they're attached to.
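
As a toy check (purely illustrative, not Neutron code):

```python
# Toy model of the rule above: an L2 switch only needs an MTU at least as
# large as any network it carries, while an L3 interface must match the
# MTU of the network it is attached to.
def l2_forwarding_ok(network_mtu, switch_mtu):
    return switch_mtu >= network_mtu

def l3_interface_ok(network_mtu, port_mtu):
    return port_mtu == network_mtu

# A 1500-MTU Neutron network over a 9000-MTU OVS switch: fine.
print(l2_forwarding_ok(1500, 9000))   # True
# The same network attached to a 9000-MTU router port: broken.
print(l3_interface_ok(1500, 9000))    # False
```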
-- 
Ian.

On 20 September 2017 at 09:58, Ihar Hrachyshka  wrote:

> On Wed, Sep 20, 2017 at 9:33 AM, Ajay Kalambur (akalambu)
>  wrote:
> > So I was forced to explicitly set the MTU on br-int
> > ovs-vsctl set int br-int mtu_request=9000
> >
> >
> > Without this the tap device added to br-int would get MTU 1500
> >
> > Would this be something the ovs l2 agent can handle since it creates the
> bridge?
>
> Yes, I guess we could do that if it fixes your problem. The issue
> stems from the fact that we use a single bridge for different networks
> with different MTUs, and it does break some assumptions kernel folks
> make about a switch (that all attached ports steer traffic in the same
> l2 domain, which is not the case because of flows we set). You may
> want to report a bug against Neutron and we can then see how to handle
> that. It will probably not be as simple as setting the value to 9000
> because different networks have different MTUs, and plugging those
> mixed ports in the same bridge may trigger MTU updates on unrelated
> tap devices. We will need to test how kernel behaves then.
>
> Also, you may be interested in reviewing an old openvswitch-dev@
> thread that I once started here:
> https://mail.openvswitch.org/pipermail/ovs-dev/2016-June/316733.html
> Sadly, I never followed up with a test scenario that wouldn't involve
> OpenStack, for OVS folks to follow up on, so it never moved anywhere.
>
> Cheers,
> Ihar
>
>


[openstack-dev] [ansible][kayobe][kolla] Announcing the availability of kayobe 2.0.0 and 3.0.0

2017-09-20 Thread Mark Goddard
Hi,

I am pleased to announce the availability of not one, but two releases of
kayobe[1]. Both provide numerous enhancements over previous releases.
Kayobe 2.0.0 is based on the Ocata OpenStack release, and Kayobe 3.0.0 on
Pike. Please see the release notes[2] for further information.

A bit about kayobe:

Kayobe is an open source tool for automating deployment of Scientific
OpenStack onto a set of bare metal servers. Kayobe is composed of Ansible
playbooks, a python module, and makes heavy use of the OpenStack kolla
project. Kayobe aims to complement the kolla-ansible project, providing an
opinionated yet highly configurable OpenStack deployment and automation of
many operational procedures.

Please see the documentation[3] for further information.

[1] https://github.com/stackhpc/kayobe/releases
[2] http://kayobe.readthedocs.io/en/latest/release-notes.html
[3] http://kayobe.readthedocs.io/en/latest


Re: [openstack-dev] [nova] Is there any reason to exclude originally failed build hosts during live migration?

2017-09-20 Thread Sylvain Bauza
On Wed, Sep 20, 2017 at 10:15 PM, melanie witt  wrote:

> On Wed, 20 Sep 2017 13:47:18 -0500, Matt Riedemann wrote:
>
>> Presumably there was a good reason why the instance failed to build on a
>> host originally, but that could be for any number of reasons: resource
>> claim failed during a race, configuration issues, etc. Since we don't
>> really know what originally happened, it seems reasonable to not exclude
>> originally attempted build targets since the scheduler filters should still
>> validate them during live migration (this is all assuming you're not using
>> the 'force' flag with live migration - and if you are, all bets are off).
>>
>
> Yeah, I think because an original failure to build could have been a
> failed claim during a race, config issue, or just been a very long time
> ago, we shouldn't continue to exclude those hosts forever.
>
> If people agree with doing this fix, then we also have to consider making
>> a similar fix for other move operations like cold migrate, evacuate and
>> unshelve. However, out of those other move operations, only cold migrate
>> attempts any retries. If evacuate or unshelve fail on the target host,
>> there is no retry.
>>
>
> I agree with doing that fix for all of the move operations.
>
>
Yeah, a host could have been failing when we created that instance a year
ago; that doesn't mean the host won't be available this time.

> -melanie
>
>
>


[openstack-dev] [nova] Pike 16.0.1 bug fix release proposed

2017-09-20 Thread Matt Riedemann

FYI, I've got a release proposed for Pike 16.0.1:

https://review.openstack.org/#/c/505796/

This includes several bug fixes for regressions introduced in 16.0.0.

A lot of these have to do with properly making allocations in Placement 
or cleaning up allocations from Placement during move operation failures 
like live migration and evacuate.


There are also a couple of high severity fixes related to dealing with 
instances that fail to build due to over quota errors.


This is the change log:

user@ubuntu:~/git/nova$ git log --oneline --no-merges 16.0.0..
a9f9e70 Add @targets_cell for live_migrate_instance method in conductor
a3f286f Set error state after failed evacuation
e87d9f5 Functional test for regression bug #1713783
8365eb6 Call terminate_connection when shelve_offloading
b9a1ccc Handle keypair not found from metadata server using cells
9cddde1 Target context when setting instance to ERROR when over quota
e53115d Move hash ring initialization to init_host() for ironic
21d7f8b Remove dest node allocation if evacuate MoveClaim fails
e00584f De-duplicate two delete_allocation_for_* methods
61ad705 Add a test to make sure failed evacuate cleans up dest allocation
d0375c2 Add recreate test for evacuate claim failure
2115094 Create allocations against forced dest host during evacuate
b5ea0d1 Refactor out claim_resources_on_destination into a utility
5ee7f9d Add recreate test for forced host evacuate not setting dest allocations
423c7bb Provide hints when nova-manage db sync fails to sync cell0
a98a52d Ensure instance mapping is updated in case of quota recheck fails
2ad865f Track which cell each instance is created in and use it consistently
72e50be Make ConductorTaskTestCase run with 2 cells
9a791df Allow setting up multiple cells in the base TestCase
7d220b3 Add release note for force live migration allocations
e069125 Fix broken link
cd82d55 Hyper-V: Perform proper cleanup after cold migration
a1462d2 Cleanup allocations on invalid dest node during live migration
fd59e9a Add functional recreate test for live migration pre-check fails
c01ca54 doc: fix show-hide sample in notification devref
cdff10e [placement] Update user doc with api-ref link
2b82aa3 [placement] Require at least one resource class in allocation
334905a [placement] Add test for empty resources in allocation
b2075bb Update PCI passthrough doc for moved options
b514f93 Fix nova assisted volume snapshots
98f0d81 libvirt: Fix getting a wrong guest object
ccfb464 conf: Allow users to unset 'keymap' options
99ab438 Skip test_rebuild_server_in_error_state for cells v1
c95f00a Remove host filter for _cleanup_running_deleted_instances periodic task


--

Thanks,

Matt



Re: [openstack-dev] [requirements][stable] EOL tags and upper-constraints.txt in tox.ini

2017-09-20 Thread Jeremy Stanley
On 2017-09-20 15:19:47 -0400 (-0400), Tony Breeds wrote:
> On Wed, Sep 20, 2017 at 06:59:50PM +, Jeremy Stanley wrote:
> > On 2017-09-20 14:46:32 -0400 (-0400), Tony Breeds wrote:
> > [...]
> > > I'd like to find a solution that doesn't need a tox_install.sh
> > [...]
> > 
> > This wart came up when discussing Zuul v3 job translations... what
> > would be ideal is if someone has the bandwidth to work upstream in
> > tox to add a constraints file option agreeable to its maintainer so
> > that we don't have to override install-command to add it in all our
> > configs.
> 
> I'm happy to work on it in tox but I don't really have an idea of how to
> handle Doug's ideal of avoiding having to update the URL for each git
> branch.
[...]

Yes, it would help us get rid of ugly wrapper scripts in many
places, but unfortunately doesn't address needing to generate
patches to adjust URLs/filenames.
-- 
Jeremy Stanley




Re: [openstack-dev] [nova] Is there any reason to exclude originally failed build hosts during live migration?

2017-09-20 Thread melanie witt

On Wed, 20 Sep 2017 13:47:18 -0500, Matt Riedemann wrote:
Presumably there was a good reason why the instance failed to build on a 
host originally, but that could be for any number of reasons: resource 
claim failed during a race, configuration issues, etc. Since we don't 
really know what originally happened, it seems reasonable to not exclude 
originally attempted build targets since the scheduler filters should 
still validate them during live migration (this is all assuming you're 
not using the 'force' flag with live migration - and if you are, all 
bets are off).


Yeah, I think because an original failure to build could have been a 
failed claim during a race, config issue, or just been a very long time 
ago, we shouldn't continue to exclude those hosts forever.


If people agree with doing this fix, then we also have to consider 
making a similar fix for other move operations like cold migrate, 
evacuate and unshelve. However, out of those other move operations, only 
cold migrate attempts any retries. If evacuate or unshelve fail on the 
target host, there is no retry.


I agree with doing that fix for all of the move operations.

-melanie



Re: [openstack-dev] [nova] Is there any reason to exclude originally failed build hosts during live migration?

2017-09-20 Thread Chris Friesen

On 09/20/2017 12:47 PM, Matt Riedemann wrote:


I wanted to bring it up here in case anyone had a good reason why we should not
continue to exclude originally failed hosts during live migration, even if the
admin is specifying one of those hosts for the live migration destination.

Presumably there was a good reason why the instance failed to build on a host
originally, but that could be for any number of reasons: resource claim failed
during a race, configuration issues, etc. Since we don't really know what
originally happened, it seems reasonable to not exclude originally attempted
build targets since the scheduler filters should still validate them during live
migration (this is all assuming you're not using the 'force' flag with live
migration - and if you are, all bets are off).


As you say, a failure on a host during the original instance creation (which 
could have been a long time ago) is not a reason to bypass that host during 
subsequent operations.


In other words, I think the list of hosts to ignore should be scoped to a single 
"operation" that requires scheduling (which would include any necessary 
rescheduling for that "operation").


Chris



Re: [openstack-dev] [ovs-announce] CFP for OpenStack Open Source Days Sydney

2017-09-20 Thread Russell Bryant
On Wed, Sep 20, 2017 at 3:21 PM, Ben Pfaff  wrote:

> In May, the OpenStack Summit graciously offered Open vSwitch use of a
> room for a day of talks, and it worked out great.  For the OpenStack
> Summit in Sydney, which will be held Nov. 6-8, they've again offered us
> time for Open vSwitch related talks, although this time, we only have
> have two 40-minute slots.
>
> If you're interested in speaking in Sydney, please send a title, list of
> speakers, and a brief abstract to ovs...@openvswitch.org by Sept. 27
> (that's only one week away!).  If we receive more quality proposals than
> the two slots allow, we will consider subdividing them into 20-minute
> pieces, and we will prioritize talks that relate to OpenStack.
>
> Thanks,
>
> Ben.
> ___
> announce mailing list
> annou...@openvswitch.org
> https://mail.openvswitch.org/mailman/listinfo/ovs-announce
>



-- 
Russell Bryant


[openstack-dev] [magnum] issue with admin_osc.keystone().trustee_domain_id

2017-09-20 Thread Waines, Greg
We are in the process of integrating MAGNUM into our OpenStack distribution.
We are working with NEWTON version of MAGNUM.
We have the MAGNUM processes up and running and configured.

However, we are seeing the following error (see stack trace below) on virtually
all MAGNUM CLI calls.

The code where the stack trace is triggered:
def add_policy_attributes(target):
"""Adds extra information for policy enforcement to raw target object"""
admin_context = context.make_admin_context()
admin_osc = clients.OpenStackClients(admin_context)
trustee_domain_id = admin_osc.keystone().trustee_domain_id
target['trustee_domain_id'] = trustee_domain_id
return target

(NOTE: this code was introduced upstream as part of a fix for CVE-2016-7404:
https://github.com/openstack/magnum/commit/2d4e617a529ea12ab5330f12631f44172a623a14)

Stack Trace:
File "/usr/lib/python2.7/site-packages/wsmeext/pecan.py", line 84, in callfunction
    result = f(self, *args, **kwargs)

  File "", line 2, in get_all

  File "/usr/lib/python2.7/site-packages/magnum/common/policy.py", line 130, in wrapper
    exc=exception.PolicyNotAuthorized, action=action)

  File "/usr/lib/python2.7/site-packages/magnum/common/policy.py", line 97, in enforce
    #add_policy_attributes(target)

  File "/usr/lib/python2.7/site-packages/magnum/common/policy.py", line 106, in add_policy_attributes
    trustee_domain_id = admin_osc.keystone().trustee_domain_id

  File "/usr/lib/python2.7/site-packages/magnum/common/keystone.py", line 237, in trustee_domain_id
    self.domain_admin_session

  File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/base.py", line 136, in get_access
    self.auth_ref = self.get_auth_ref(session)

  File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/v3/base.py", line 167, in get_auth_ref
    authenticated=False, log=False, **rkwargs)

  File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 681, in post
    return self.request(url, 'POST', **kwargs)

  File "/usr/lib/python2.7/site-packages/positional/__init__.py", line 101, in inner
    return wrapped(*args, **kwargs)

  File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 570, in request
    raise exceptions.from_response(resp, method, url)

NotFound: The resource could not be found. (HTTP 404)


Any ideas on what our issue could be?
Or next steps to investigate?

thanks in advance,
Greg.


Re: [openstack-dev] [requirements][stable] EOL tags and upper-constraints.txt in tox.ini

2017-09-20 Thread Tony Breeds
On Wed, Sep 20, 2017 at 06:59:50PM +, Jeremy Stanley wrote:
> On 2017-09-20 14:46:32 -0400 (-0400), Tony Breeds wrote:
> [...]
> > I'd like to find a solution that doesn't need a tox_install.sh
> [...]
> 
> This wart came up when discussing Zuul v3 job translations... what
> would be ideal is if someone has the bandwidth to work upstream in
> tox to add a constraints file option agreeable to its maintainer so
> that we don't have to override install-command to add it in all our
> configs.

I'm happy to work on it in tox but I don't really have an idea of how to
handle Doug's ideal of avoiding having to update the URL for each git
branch.

That was my original motivator for doing something 'dynamic'[1] but I
can't really think of a nice generic solution.  We *could* have tox take
a template and expand it from the package data, effectively doing my
suggestion from [1] without the shell.  That feels like over-engineering
and doesn't accommodate OpenStack Hosted projects very well.


[1] http://lists.openstack.org/pipermail/openstack-dev/2017-August/120626.html
> -- 
> Jeremy Stanley




Yours Tony.




Re: [openstack-dev] [requirements][stable] EOL tags and upper-constraints.txt in tox.ini

2017-09-20 Thread Jeremy Stanley
On 2017-09-20 14:46:32 -0400 (-0400), Tony Breeds wrote:
[...]
> I'd like to find a solution that doesn't need a tox_install.sh
[...]

This wart came up when discussing Zuul v3 job translations... what
would be ideal is if someone has the bandwidth to work upstream in
tox to add a constraints file option agreeable to its maintainer so
that we don't have to override install-command to add it in all our
configs.
-- 
Jeremy Stanley




Re: [openstack-dev] [skip-level-upgrades][fast-forward-upgrades] PTG summary

2017-09-20 Thread Sean Dague

On 09/20/2017 02:25 PM, Doug Hellmann wrote:

Excerpts from Dan Smith's message of 2017-09-20 10:09:54 -0700:

- Modify the `supports-upgrades`[3] and `supports-accessible-upgrades`[4] tags

I have yet to look into the formal process around making changes to
these tags but I will aim to make a start ASAP.


We've previously tried to avoid changing assert tag definitions because
we then have to re-review all of the projects that already have the tags
to ensure they meet the new criteria. It might be easier to add a new
tag for assert:supports-fast-forward-upgrades with the criteria that are
unique to this use case.


We already have a confusing array of upgrade tags, so I would really
rather not add more that overlap in complicated ways. Most of the change
here is clarification of things I think most people assume, so I don't
think the validation effort will be a lot of work.

--Dan


OK, I'll wait to see what the actual updates look like before commenting
further, but it sounds like working on the existing tag definitions is a
good first step.


Agreed. We're already at 5 upgrade tags now?

I think honestly we're going to need a picture to explain the
differences between them. Based on the confusion that kept coming up
during discussions at the PTG, I think we need to circle around and
figure out if there are different ways to explain this with greater
clarity.


  -Sean

--
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Is there any reason to exclude originally failed build hosts during live migration?

2017-09-20 Thread Matt Riedemann

It's a weird question, so I'll explain.

An issue came up in IRC today where someone was trying to live migrate 
an instance to a specified host, and the RetryFilter in the scheduler 
was kicking out the specified host, even though other similar instances 
were live migrating to that specified host successfully.


After some DB debugging, we figured out that the instance that failed to 
live migrate has a persisted request spec which listed the specified 
host as an originally attempted host during the initial instance create. 
The RetryFilter was tripping up on this during live migration saying, 
essentially, "you've already tried that host, sorry".


This was confusing because the live migration task in conductor actually 
manually handles retries if pre-migration checks fail on the selected 
destination host. This is why we have the "migrate_max_retries" config 
option.


The actual fix for this is trivial:

https://review.openstack.org/#/c/505771/

I wanted to bring it up here in case anyone had a good reason why we 
should not continue to exclude originally failed hosts during live 
migration, even if the admin is specifying one of those hosts for the 
live migration destination.


Presumably there was a good reason why the instance failed to build on a 
host originally, but that could be for any number of reasons: resource 
claim failed during a race, configuration issues, etc. Since we don't 
really know what originally happened, it seems reasonable to not exclude 
originally attempted build targets since the scheduler filters should 
still validate them during live migration (this is all assuming you're 
not using the 'force' flag with live migration - and if you are, all 
bets are off).
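
As a rough sketch of the kind of fix being proposed (simplified, and not
nova's actual code — the dict layout and function name here are
illustrative), the live migration task could drop the persisted
create-time retry history before scheduling:

```python
def reset_retry_for_move(request_spec):
    """Hedged sketch only: clear retry history persisted from the original
    build, so the RetryFilter does not reject hosts that merely failed at
    instance-create time. The request_spec dict layout is illustrative.
    """
    request_spec.pop("retry", None)  # forget hosts tried during initial build
    return request_spec
```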


If people agree with doing this fix, then we also have to consider 
making a similar fix for other move operations like cold migrate, 
evacuate and unshelve. However, out of those other move operations, only 
cold migrate attempts any retries. If evacuate or unshelve fail on the 
target host, there is no retry.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][stable] EOL tags and upper-constraints.txt in tox.ini

2017-09-20 Thread Tony Breeds
On Wed, Sep 20, 2017 at 02:24:32PM -0400, Doug Hellmann wrote:

> That solves the problem of having the constraints file disappear after
> the EOL, but it doesn't really solve the problem of having to update the
> branches every time we open one. Having tox_install.sh figure out the
> URL from the .gitreview file addresses that, but it may be too magic for
> some folks. I don't know if we have generally agreed to let anything
> other than git-review look at that file.

Right, I really like the idea of working this out at runtime, but I'd
like to find a solution that doesn't need a tox_install.sh or similar.

Also, using .gitreview doesn't help people that are using sdists or
(probably) packages.  I don't know if we want that to stop us, but it's
something we need to discuss.

[tony@thor tmp]$ wget -o /dev/null http://tarballs.openstack.org/nova/nova-master.tar.gz
[tony@thor tmp]$ tar tvf nova-master.tar.gz | grep git
-rw-rw-r-- jenkins/jenkins   2 2017-09-20 05:30 nova-16.0.0.0rc2.dev393/nova/CA/projects/.gitignore
-rw-rw-r-- jenkins/jenkins   2 2017-09-20 05:30 nova-16.0.0.0rc2.dev393/nova/CA/reqs/.gitignore
-rw-rw-r-- jenkins/jenkins 121 2017-09-20 05:30 nova-16.0.0.0rc2.dev393/nova/CA/.gitignore

Yours Tony.




Re: [openstack-dev] [requirements][stable] EOL tags and upper-constraints.txt in tox.ini

2017-09-20 Thread Jeremy Stanley
On 2017-09-20 14:37:27 -0400 (-0400), Doug Hellmann wrote:
> Excerpts from Jeremy Stanley's message of 2017-09-20 18:35:23 +:
> > On 2017-09-20 14:24:32 -0400 (-0400), Doug Hellmann wrote:
> > [...]
> > > it doesn't really solve the problem of having to update the
> > > branches every time we open one. Having tox_install.sh figure out
> > > the URL from the .gitreview file addresses that, but it may be too
> > > magic for some folks. I don't know if we have generally agreed to
> > > let anything other than git-review look at that file.
> > 
> > Further, git-review has grown additional magic to allow it to figure
> > out sane values for defaultbranch when omitted from the .gitreview
> > file, so having other tools depend on that implementation detail
> > could come back to bite us down the road.
> 
> It sounds like we want git-review to have a mode that prints the
> branch to which the change would be proposed, without actually
> proposing anything. Our scripts could then use that.

"Our scripts" maybe... but calling it from tox.ini to guess branches
adds an implicit dependency on git-review for every project doing
that, at least for developers who want to be able to make use of the
constraints URL fallback.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][stable] EOL tags and upper-constraints.txt in tox.ini

2017-09-20 Thread Tony Breeds
On Wed, Sep 20, 2017 at 02:09:15PM -0400, Tony Breeds wrote:
> On Wed, Sep 20, 2017 at 01:43:51PM -0400, Tony Breeds wrote:
> 
> The solution I thought we decided on at the PTG is:
> >  * Add a post job to all branches that publish a constraints/$series.txt
> >to $server (I don't mind if it's releases.o.o or tarballs.o.o).
> 
> Actually we might be better to do this daily from the periodic pipeline.
> In our CI we always gate with what is in git so that wouldn't be
> impacted.  The question is do we need external consumers to be "up to
> the minute" or is a day's lag acceptable?
> 
> I kinda feel like it's okay to be a little laggy.

https://review.openstack.org/#/q/topic:tools/publish-constraints

Comments welcome.

Once those (or similar) land I'll happily update the release tools and
optionally $project repos.

Yours Tony.




Re: [openstack-dev] [requirements][stable] EOL tags and upper-constraints.txt in tox.ini

2017-09-20 Thread Doug Hellmann
Excerpts from Jeremy Stanley's message of 2017-09-20 18:35:23 +:
> On 2017-09-20 14:24:32 -0400 (-0400), Doug Hellmann wrote:
> [...]
> > it doesn't really solve the problem of having to update the
> > branches every time we open one. Having tox_install.sh figure out
> > the URL from the .gitreview file addresses that, but it may be too
> > magic for some folks. I don't know if we have generally agreed to
> > let anything other than git-review look at that file.
> 
> Further, git-review has grown additional magic to allow it to figure
> out sane values for defaultbranch when omitted from the .gitreview
> file, so having other tools depend on that implementation detail
> could come back to bite us down the road.

It sounds like we want git-review to have a mode that prints the
branch to which the change would be proposed, without actually
proposing anything. Our scripts could then use that.
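
Until such a mode exists, a minimal sketch of the idea — reading
`defaultbranch` from `.gitreview` and mapping it to a published
constraints URL — might look like the following. The function name and
URL scheme are assumptions, not an agreed interface, and this hard-codes
exactly the .gitreview-layout knowledge Jeremy warns about:

```python
import configparser

def constraints_url(gitreview_path=".gitreview"):
    # Hedged sketch: parse .gitreview (INI format) for gerrit.defaultbranch,
    # falling back to master when the file or key is absent, then map the
    # series name to a hypothetical published constraints URL.
    cp = configparser.ConfigParser()
    cp.read(gitreview_path)
    if cp.has_option("gerrit", "defaultbranch"):
        branch = cp.get("gerrit", "defaultbranch")
    else:
        branch = "master"
    series = branch.split("/", 1)[-1]  # "stable/queens" -> "queens"
    return ("https://releases.openstack.org/constraints/"
            "%s-upper-constraints.txt" % series)
```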

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][stable] EOL tags and upper-constraints.txt in tox.ini

2017-09-20 Thread Jeremy Stanley
On 2017-09-20 14:24:32 -0400 (-0400), Doug Hellmann wrote:
[...]
> it doesn't really solve the problem of having to update the
> branches every time we open one. Having tox_install.sh figure out
> the URL from the .gitreview file addresses that, but it may be too
> magic for some folks. I don't know if we have generally agreed to
> let anything other than git-review look at that file.

Further, git-review has grown additional magic to allow it to figure
out sane values for defaultbranch when omitted from the .gitreview
file, so having other tools depend on that implementation detail
could come back to bite us down the road.
-- 
Jeremy Stanley




Re: [openstack-dev] [requirements][stable] EOL tags and upper-constraints.txt in tox.ini

2017-09-20 Thread Matthew Thode
On 17-09-20 13:43:51, Tony Breeds wrote:
> On Wed, Sep 20, 2017 at 01:08:45PM -0400, Doug Hellmann wrote:
> > Excerpts from Jeremy Stanley's message of 2017-09-20 13:36:38 +:
> > > On 2017-09-20 08:41:14 -0400 (-0400), Doug Hellmann wrote:
> > > [...]
> > > > Is there any reason not to use the published files for all regular
> > > > builds, instead of having tox.ini point to a git URL? Maybe only for
> > > > regular builds off of stable branches?
> > > 
> > > I'm not sure what you mean by "regular builds" but the plan as I
> > 
> > s/regular/non-CI/
> > 
> > > understood it was to switch from git.o.o URLs to releases.o.o URLs
> > > in the tox.ini files in all branches of projects already consuming
> > > constraints files that way.
> > > 
> > > As early as now (if we already had the publication job in place) we
> > > could update them all in master branches to retrieve something like
> > > https://releases.openstack.org/constraints/queens-upper-constraints.txt
> > > and then once stable/queens is branched in all repos (including the
> > > requirements repo), switch the job to begin publishing to a URL for
> > > rocky and push tox.ini updates to master branches switching the URL
> > > to that as early in the cycle as possible. Alternatively, we could
> > > publish master and queens copies (identical initially) and expect
> > > the master branch tox.ini files to refer to master but then switch
> > > them to queens on the stable/queens branch during RC. It just comes
> > > down to which the stable/requirements/release teams think makes the
> > > most sense from a procedural perspective.
> > 
> > We should think through which of those approaches is going to result in
> > the least amount of synchronization. We do have a window in which to
> > make changes in a given branch for consuming projects, but making the
> > relevant changes in the requirements repository seems like it could be
> > error prone, especially because we try to branch it after the other
> > repositories.
> 
> The solution I thought we decided on at the PTG is:
>  * Add a post job to all branches that publish a constraints/$series.txt
>to $server (I don't mind if it's releases.o.o or tarballs.o.o).
>  * On the master branch we publish both master.txt and $series.txt
>(currently queens.txt) when we fork rocky from master we update the
>    publish job to publish master and rocky.txt. As Doug points out
>    there is a timing race here, but it's much smaller than the one we
>have now.
>  * We update the projects to use the static (non-git) URL for the
>constraints files.
>  * We update the release tools to generate the new style URL.
> 
> Optionally, and this requires discussion, we use a custom tox_install.sh
> to extract the branch from .gitreview and generate the URL which would
> remove the need for the last step.  There are clearly pros and cons to
> that so I'm not advocating for it now.
> 
> Those constraints files would never be removed but they'd stop getting
> updated when we EOL the requirements branch.
> 
> How does that sound?
> 
> Yours Tony.

That's correct, I haven't had time this week to create the jobs quite
yet, but hope to do so either this weekend or next week.

It's #3 in our list of items to generate tasks/bugs from.
https://etherpad.openstack.org/p/queens-PTG-requirements

-- 
Matthew Thode (prometheanfire)




Re: [openstack-dev] [skip-level-upgrades][fast-forward-upgrades] PTG summary

2017-09-20 Thread Doug Hellmann
Excerpts from Dan Smith's message of 2017-09-20 10:09:54 -0700:
> >> - Modify the `supports-upgrades`[3] and `supports-accessible-upgrades`[4] 
> >> tags
> >>
> >>I have yet to look into the formal process around making changes to
> >>these tags but I will aim to make a start ASAP.
> > 
> > We've previously tried to avoid changing assert tag definitions because
> > we then have to re-review all of the projects that already have the tags
> > to ensure they meet the new criteria. It might be easier to add a new
> > tag for assert:supports-fast-forward-upgrades with the criteria that are
> > unique to this use case.
> 
> We already have a confusing array of upgrade tags, so I would really 
> rather not add more that overlap in complicated ways. Most of the change 
> here is clarification of things I think most people assume, so I don't 
> think the validation effort will be a lot of work.
> 
> --Dan

OK, I'll wait to see what the actual updates look like before commenting
further, but it sounds like working on the existing tag definitions is a
good first step.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][stable] EOL tags and upper-constraints.txt in tox.ini

2017-09-20 Thread Doug Hellmann
Excerpts from Tony Breeds's message of 2017-09-20 13:43:51 -0400:
> On Wed, Sep 20, 2017 at 01:08:45PM -0400, Doug Hellmann wrote:
> > Excerpts from Jeremy Stanley's message of 2017-09-20 13:36:38 +:
> > > On 2017-09-20 08:41:14 -0400 (-0400), Doug Hellmann wrote:
> > > [...]
> > > > Is there any reason not to use the published files for all regular
> > > > builds, instead of having tox.ini point to a git URL? Maybe only for
> > > > regular builds off of stable branches?
> > > 
> > > I'm not sure what you mean by "regular builds" but the plan as I
> > 
> > s/regular/non-CI/
> > 
> > > understood it was to switch from git.o.o URLs to releases.o.o URLs
> > > in the tox.ini files in all branches of projects already consuming
> > > constraints files that way.
> > > 
> > > As early as now (if we already had the publication job in place) we
> > > could update them all in master branches to retrieve something like
> > > https://releases.openstack.org/constraints/queens-upper-constraints.txt
> > > and then once stable/queens is branched in all repos (including the
> > > requirements repo), switch the job to begin publishing to a URL for
> > > rocky and push tox.ini updates to master branches switching the URL
> > > to that as early in the cycle as possible. Alternatively, we could
> > > publish master and queens copies (identical initially) and expect
> > > the master branch tox.ini files to refer to master but then switch
> > > them to queens on the stable/queens branch during RC. It just comes
> > > down to which the stable/requirements/release teams think makes the
> > > most sense from a procedural perspective.
> > 
> > We should think through which of those approaches is going to result in
> > the least amount of synchronization. We do have a window in which to
> > make changes in a given branch for consuming projects, but making the
> > relevant changes in the requirements repository seems like it could be
> > error prone, especially because we try to branch it after the other
> > repositories.
> 
> The solution I thought we decided on at the PTG is:
>  * Add a post job to all branches that publish a constraints/$series.txt
>to $server (I don't mind if it's releases.o.o or tarballs.o.o).
>  * On the master branch we publish both master.txt and $series.txt
>(currently queens.txt) when we fork rocky from master we update the
>    publish job to publish master and rocky.txt. As Doug points out
>    there is a timing race here, but it's much smaller than the one we
>have now.

Yes, I think that would work. It adds another manual step to the release
process (to update the job) but that can technically be done as soon as
we know the next release name because we can publish from master to as
many different future names as we want.

We're starting to have a fair amount of automation that relies on
knowing the names of the release series and their statuses. I wonder if
we can produce a central library of some sort to give us that
information in a consumable format so we only have to update it in one
place.
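
A sketch of what that central source of truth could look like (the module
layout, key names, and statuses here are invented for illustration, not an
existing library):

```python
# Hypothetical single authoritative mapping of series name -> status that
# release tooling could import instead of hard-coding branch names.
SERIES_STATUS = {
    "queens": "development",
    "pike": "maintained",
    "ocata": "maintained",
    "newton": "maintained",
    "mitaka": "end of life",
}

def current_series():
    """Return the series currently under development."""
    return next(name for name, status in SERIES_STATUS.items()
                if status == "development")

def is_open(name):
    """True when the series branch still receives updates."""
    return SERIES_STATUS.get(name, "end of life") != "end of life"
```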

>  * We update the projects to use the static (non-git) URL for the
>constraints files.
>  * We update the release tools to generate the new style URL.
> 
> Optionally, and this requires discussion, we use a custom tox_install.sh
> to extract the branch from .gitreview and generate the URL which would
> remove the need for the last step.  There are clearly pros and cons to
> that so I'm not advocating for it now.
> 
> Those constraints files would never be removed but they'd stop getting
> updated when we EOL the requirements branch.
> 
> How does that sound?

That solves the problem of having the constraints file disappear after
the EOL, but it doesn't really solve the problem of having to update the
branches every time we open one. Having tox_install.sh figure out the
URL from the .gitreview file addresses that, but it may be too magic for
some folks. I don't know if we have generally agreed to let anything
other than git-review look at that file.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][stable] EOL tags and upper-constraints.txt in tox.ini

2017-09-20 Thread Tony Breeds
On Wed, Sep 20, 2017 at 01:43:51PM -0400, Tony Breeds wrote:

> The solution I thought we decided on at the PTG is:
>  * Add a post job to all branches that publish a constraints/$series.txt
>to $server (I don't mind if it's releases.o.o or tarballs.o.o).

Actually we might be better to do this daily from the periodic pipeline.
In our CI we always gate with what is in git so that wouldn't be
impacted.  The question is do we need external consumers to be "up to
the minute" or is a day's lag acceptable?

I kinda feel like it's okay to be a little laggy.

Yours Tony.




[openstack-dev] [all][charms][fuel][Packaging-Deb] Remove old numeric branches

2017-09-20 Thread Tony Breeds
Hello all,
Now that we have cleaned up some of the older series-based branches, I've
taken a look at some of the numeric-based branches.

The projects in $subject are particularly impacted.  I've made my best
guess at which branches are current, but I'd really appreciate guidance
on the life span of these branches.

As before, since the ultimate consumer of the data is infra, I've presented
this in a form that's easy for them to consume:

Ideally I'd like to move on this this week, but that isn't enough
consultation, so I'll pass the list to infra on Monday 2nd October.
---
eol-branch.sh -- stable/0.10 0.10-eol \
 openstack/fuel-plugin-elasticsearch-kibana \
 openstack/fuel-plugin-influxdb-grafana \
 openstack/fuel-plugin-lma-collector \
 openstack/fuel-plugin-lma-infrastructure-alerting
eol-branch.sh -- stable/0.17.0 0.17.0-eol openstack/bandit
eol-branch.sh -- stable/0.17.0 0.17.0-eol-dpkg openstack/deb-bandit
eol-branch.sh -- stable/0.2 0.2-eol openstack/yaql
eol-branch.sh -- stable/0.2 0.2-eol-dpkg openstack/deb-python-yaql
eol-branch.sh -- stable/0.7 0.7-eol \
 openstack/fuel-plugin-elasticsearch-kibana \
 openstack/fuel-plugin-influxdb-grafana \
 openstack/fuel-plugin-lma-collector
eol-branch.sh -- stable/0.8 0.8-eol \
 openstack/fuel-plugin-elasticsearch-kibana \
 openstack/fuel-plugin-influxdb-grafana \
 openstack/fuel-plugin-lma-collector \
 openstack/fuel-plugin-lma-infrastructure-alerting
eol-branch.sh -- stable/0.9 0.9-eol \
 openstack/fuel-plugin-elasticsearch-kibana \
 openstack/fuel-plugin-influxdb-grafana \
 openstack/fuel-plugin-lma-collector \
 openstack/fuel-plugin-lma-infrastructure-alerting \
 openstack/rally
eol-branch.sh -- stable/0.9 0.9-eol-dpkg openstack/deb-rally
eol-branch.sh -- stable/1.0 1.0-eol \
 openstack/fuel-plugin-elasticsearch-kibana \
 openstack/fuel-plugin-external-lb \
 openstack/fuel-plugin-influxdb-grafana \
 openstack/fuel-plugin-lma-collector \
 openstack/fuel-plugin-lma-infrastructure-alerting \
 openstack/fuel-plugin-neutron-vpnaas \
 openstack/fuel-plugin-scaleio \
 openstack/fuel-plugin-scaleio-cinder
eol-branch.sh -- stable/1.0 1.0-eol-dpkg openstack/deb-gnocchi openstack/deb-python-gnocchiclient
eol-branch.sh -- stable/1.0.0 1.0.0-eol \
 openstack/fuel-plugin-kafka openstack/fuel-plugin-mellanox \
 openstack/fuel-plugin-nova-nfs openstack/fuel-plugin-tls \
 openstack/fuel-plugin-vxlan
eol-branch.sh -- stable/1.0.1 1.0.1-eol openstack/fuel-plugin-openstack-telemetry
eol-branch.sh -- stable/1.0.2 1.0.2-eol openstack/fuel-plugin-ceilometer-redis
eol-branch.sh -- stable/1.0.x 1.0.x-eol-dpkg openstack/deb-python-django-pyscss
eol-branch.sh -- stable/1.1 1.1-eol-dpkg openstack/deb-gnocchi
eol-branch.sh -- stable/1.2 1.2-eol-dpkg openstack/deb-gnocchi
eol-branch.sh -- stable/1.2.0 1.2.0-eol \
 openstack/freezer openstack/freezer-api \
 openstack/fuel-plugin-cinder-netapp
eol-branch.sh -- stable/1.3 1.3-eol-dpkg openstack/deb-gnocchi
eol-branch.sh -- stable/10.0 10.0-eol openstack/fuel-plugin-ovs
eol-branch.sh -- stable/14.04 14.04-eol \
 openstack/charm-neutron-api-plumgrid \
 openstack/charm-plumgrid-director \
 openstack/charm-plumgrid-edge \
 openstack/charm-plumgrid-gateway
eol-branch.sh -- stable/2.0 2.0-eol openstack/fuel-plugin-contrail
eol-branch.sh -- stable/2.0 2.0-eol-dpkg openstack/deb-gnocchi openstack/deb-python-gnocchiclient
eol-branch.sh -- stable/2.0.0 2.0.0-eol \
 openstack/fuel-plugin-cinder-netapp \
 openstack/fuel-plugin-mellanox \
 openstack/fuel-plugin-midonet
eol-branch.sh -- stable/2.1 2.1-eol openstack/fuel-plugin-contrail
eol-branch.sh -- stable/2.1 2.1-eol-dpkg openstack/deb-gnocchi openstack/deb-python-gnocchiclient
eol-branch.sh -- stable/2.1.0 2.1.0-eol openstack/fuel-plugin-midonet
eol-branch.sh -- stable/2.2 2.2-eol openstack/fuel-plugin-contrail
eol-branch.sh -- stable/2.2 2.2-eol-dpkg openstack/deb-gnocchi openstack/deb-python-gnocchiclient
eol-branch.sh -- stable/2.2.0 2.2.0-eol openstack/fuel-plugin-midonet
eol-branch.sh -- stable/2.6 2.6-eol-dpkg openstack/deb-python-gnocchiclient
eol-branch.sh -- stable/3.0 3.0-eol openstack/fuel-plugin-contrail openstack/fuel-plugin-nsxv
eol-branch.sh -- stable/3.0 3.0-eol-dpkg openstack/deb-gnocchi
eol-branch.sh -- stable/3.0.0 3.0.0-eol openstack/fuel-plugin-mellanox
eol-branch.sh -- stable/3.0.1 3.0.1-eol \
 openstack/fuel-docs openstack/fuel-plugin-contrail \
 openstack/fuel-plugin-midonet

Re: [openstack-dev] [requirements][stable] EOL tags and upper-constraints.txt in tox.ini

2017-09-20 Thread Tony Breeds
On Wed, Sep 20, 2017 at 01:08:45PM -0400, Doug Hellmann wrote:
> Excerpts from Jeremy Stanley's message of 2017-09-20 13:36:38 +:
> > On 2017-09-20 08:41:14 -0400 (-0400), Doug Hellmann wrote:
> > [...]
> > > Is there any reason not to use the published files for all regular
> > > builds, instead of having tox.ini point to a git URL? Maybe only for
> > > regular builds off of stable branches?
> > 
> > I'm not sure what you mean by "regular builds" but the plan as I
> 
> s/regular/non-CI/
> 
> > understood it was to switch from git.o.o URLs to releases.o.o URLs
> > in the tox.ini files in all branches of projects already consuming
> > constraints files that way.
> > 
> > As early as now (if we already had the publication job in place) we
> > could update them all in master branches to retrieve something like
> > https://releases.openstack.org/constraints/queens-upper-constraints.txt
> > and then once stable/queens is branched in all repos (including the
> > requirements repo), switch the job to begin publishing to a URL for
> > rocky and push tox.ini updates to master branches switching the URL
> > to that as early in the cycle as possible. Alternatively, we could
> > publish master and queens copies (identical initially) and expect
> > the master branch tox.ini files to refer to master but then switch
> > them to queens on the stable/queens branch during RC. It just comes
> > down to which the stable/requirements/release teams think makes the
> > most sense from a procedural perspective.
> 
> We should think through which of those approaches is going to result in
> the least amount of synchronization. We do have a window in which to
> make changes in a given branch for consuming projects, but making the
> relevant changes in the requirements repository seems like it could be
> error prone, especially because we try to branch it after the other
> repositories.

The solution I thought we decided on at the PTG is:
 * Add a post job to all branches that publish a constraints/$series.txt
   to $server (I don't mind if it's releases.o.o or tarballs.o.o).
 * On the master branch we publish both master.txt and $series.txt
   (currently queens.txt) when we fork rocky from master we update the
   publish job to publish master and rocky.txt. As Doug points out
   there is a timing race here, but it's much smaller than the one we
   have now.
 * We update the projects to use the static (non-git) URL for the
   constraints files.
 * We update the release tools to generate the new style URL.

Optionally, and this requires discussion, we use a custom tox_install.sh
to extract the branch from .gitreview and generate the URL which would
remove the need for the last step.  There are clearly pros and cons to
that so I'm not advocating for it now.

Those constraints files would never be removed but they'd stop getting
updated when we EOL the requirements branch.

How does that sound?

Yours Tony.




Re: [openstack-dev] [TripleO] TripleO/Ansible PTG session

2017-09-20 Thread James Slagle
On Tue, Sep 19, 2017 at 8:37 AM, Giulio Fidente  wrote:
> On 09/18/2017 05:37 PM, James Slagle wrote:
>> - The entire sequence and flow is driven via Mistral on the Undercloud
>> by default. This preserves the API layer and provides a clean reusable
>> interface for the CLI and GUI.
>
> I think it's worth saying that we want to move the deployment steps out
> of heat and into ansible, not into mistral, so that mistral will run the
> workflow only once and let ansible go through the steps
>
> I think having the steps in mistral would be a nice option for easily
> rerunning a particular deployment step from the GUI, versus having
> them in ansible, which is instead a better option for CLI users ... but
> it looks like having them in ansible is the only option which permits us
> to reuse the same code to deploy an undercloud, because having the steps
> in mistral would require the undercloud installation itself to depend on
> mistral, which we don't want.
>
> James, Dan, please comment on the above if I am wrong

That's correct. We don't want to require Mistral to install the
Undercloud. However, I don't think that necessarily means it has to be
a single call to ansible-playbook. We could have multiple invocations
of ansible-playbook. Both Mistral and CLI code for installing the
undercloud could handle that easily.

You wouldn't be able to interleave an external playbook among the
deploy steps however. That would have to be done under a single call
to ansible-playbook (at least how that is written now). We could
however have hooks that could serve as integration points to call
external playbooks after each step.
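
As an illustration of those hook points (the file and variable names are
invented, not an existing tripleo interface, and the top-level
include/import syntax varies by Ansible version):

```yaml
# Sketch of a generated top-level playbook with a hook after each deploy
# step where an external playbook (e.g. ceph-ansible) could be inserted.
- import_playbook: deploy_step_1.yaml
- import_playbook: "{{ post_step_1_hook | default('noop-hook.yaml') }}"
- import_playbook: deploy_step_2.yaml
- import_playbook: "{{ post_step_2_hook | default('noop-hook.yaml') }}"
```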

>> - It would still be possible to run ansible-playbook directly for
>> various use cases (dev/test/POC/demos). This preserves the quick
>> iteration via Ansible that is often desired.
>>
>> - The remaining SoftwareDeployment resources in tripleo-heat-templates
>> need to be supported by config download so that the entire
>> configuration can be driven with Ansible, not just the deployment
>> steps. The success criteria for this point would be to illustrate
>> using an image that does not contain a running os-collect-config.
>>
>> - The ceph-ansible implementation done in Pike could be reworked to
>> use this model. "config download" could generate playbooks that have
>> hooks for calling external playbooks, or those hooks could be
>> represented in the templates directly. The result would be the same
>> either way though in that Heat would no longer be triggering a
>> separate Mistral workflow just for ceph-ansible.
>
> I'd say for ceph-ansible, kubernetes and in general anything else which
> needs to run with a standard playbook installed on the undercloud and
> not one generated via the heat templates... these "external" services
> usually require the inventory file to be in a different format, to
> describe the hosts to use on a per-service basis, not per-role (and I
> mean tripleo roles here, not ansible roles obviously)
>
> About that, we discussed a more long term vision where the playbooks
> (static data) needed to describe how to deploy/upgrade a given service live
> in a separate repo (like tripleo-apb) and we "compose" from heat the
> list of playbooks to be executed based on the roles/enabled services; in
> this scenario we'd be much closer to what we had to do for ceph-ansible
> and I feel like that might finally allow us merge back the ceph
> deployment (or kubernetes deployment) process into the more general
> approach driven by tripleo
>
> James, Dan, comments?

Agreed, I think this is the longer-term plan with regard to using
APBs, where everything consumed is an external playbook/role.

We definitely want to consider this plan in parallel with the POC work
that Flavio is pulling together and make sure that they are aligned so
that we're not constantly reworking the framework.

I've not yet had a chance to review the material he sent out this
morning, but perhaps we could work together to update the sequence
diagram to also have a "future" state to indicate where we are going
and what it would look like with APBs and external playbooks.


-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [skip-level-upgrades][fast-forward-upgrades] PTG summary

2017-09-20 Thread Dan Smith

- Modify the `supports-upgrades`[3] and `supports-accessible-upgrades`[4] tags

   I have yet to look into the formal process around making changes to
   these tags but I will aim to make a start ASAP.


We've previously tried to avoid changing assert tag definitions because
we then have to re-review all of the projects that already have the tags
to ensure they meet the new criteria. It might be easier to add a new
tag for assert:supports-fast-forward-upgrades with the criteria that are
unique to this use case.


We already have a confusing array of upgrade tags, so I would really 
rather not add more that overlap in complicated ways. Most of the change 
here is clarification of things I think most people assume, so I don't 
think the validation effort will be a lot of work.


--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][stable] EOL tags and upper-constraints.txt in tox.ini

2017-09-20 Thread Doug Hellmann
Excerpts from Jeremy Stanley's message of 2017-09-20 13:36:38 +:
> On 2017-09-20 08:41:14 -0400 (-0400), Doug Hellmann wrote:
> [...]
> > Is there any reason not to use the published files for all regular
> > builds, instead of having tox.ini point to a git URL? Maybe only for
> > regular builds off of stable branches?
> 
> I'm not sure what you mean by "regular builds" but the plan as I

s/regular/non-CI/

> understood it was to switch from git.o.o URLs to releases.o.o URLs
> in the tox.ini files in all branches of projects already consuming
> constraints files that way.
> 
> As early as now (if we already had the publication job in place) we
> could update them all in master branches to retrieve something like
> https://releases.openstack.org/constraints/queens-upper-constraints.txt
> and then once stable/queens is branched in all repos (including the
> requirements repo), switch the job to begin publishing to a URL for
> rocky and push tox.ini updates to master branches switching the URL
> to that as early in the cycle as possible. Alternatively, we could
> publish master and queens copies (identical initially) and expect
> the master branch tox.ini files to refer to master but then switch
> them to queens on the stable/queens branch during RC. It just comes
> down to which the stable/requirements/release teams think makes the
> most sense from a procedural perspective.

We should think through which of those approaches is going to result in
the least amount of synchronization. We do have a window in which to
make changes in a given branch for consuming projects, but making the
relevant changes in the requirements repository seems like it could be
error prone, especially because we try to branch it after the other
repositories.

In any case, I like whichever approach can be made to work and will
leave bikeshedding on URLs paths to the reviews that implement the
one we pick.

If it would be useful, I would be happy to help advise anyone who
wants to modify the release automation scripts to handle this new
case.

Doug

> Remember also that the timing on this doesn't require extreme
> precision and there are no chicken-and-egg/catch-22 problems
> associated with updating because the URLs in question are merely a
> fallback method for when the constraints file is not already
> provided locally. In the CI system, we directly provide constraints
> files so that we can respect depends-on to requirements repo changes
> and the like, so in practice this fallback is primarily for the
> convenience of developers running tox locally.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [skip-level-upgrades][fast-forward-upgrades] PTG summary

2017-09-20 Thread Doug Hellmann
Excerpts from Lee Yarwood's message of 2017-09-20 14:29:12 +0100:
> My thanks again to everyone who attended and contributed to the
> skip-level upgrades track over the first two days of last week's PTG.
> I've included a short summary of our discussions below with a list of
> agreed actions for Queens at the end.
> 
> tl;dr s/skip-level/fast-forward/g
> 
> https://etherpad.openstack.org/p/queens-PTG-skip-level-upgrades
> 
> Monday - Define and rename
> --
> 
> During our first session [1] we briefly discussed the history of the
> skip-level upgrades effort within the community and the various
> misunderstandings that have arisen from previous conversations around
> this topic at past events.
> 
> We agreed that at present the only way to perform upgrades between N and
> N+>=2 releases of OpenStack was to upgrade linearly through each major
> release, without skipping between the starting and target release of the
> upgrade.
> 
> This is contrary to previous discussions on the topic where it had been
> suggested that releases could be skipped if DB migrations for these
> releases were applied in bulk later in the process. As projects within
> the community currently offer no such support for this it was agreed to
> continue to use the supported N to N+1 upgrade jumps, albeit in a
> minimal, offline way.
> 
> The name skip-level upgrades has had an obvious role to play in the
> confusion here and as such the renaming of this effort was discussed at
> length. Various suggestions are listed on the pad but for the time being
> I'm going to stick with the basic `fast-forward upgrades` name (FFOU,
> OFF, BOFF, FFUD etc were all close behind). This removes any notion of
> releases being skipped and should hopefully avoid any further confusion
> in the future.
>  
> Support by the projects for offline upgrades was then discussed with a
> recent Ironic issue [2] highlighted as an example where projects have
> required services to run before the upgrade could be considered
> complete. The additional requirement of ensuring both workloads and the
> data plane remain active during the upgrade was also then discussed. It
> was agreed that both the `supports-upgrades` [3] and
> `supports-accessible-upgrades` [4] tags should be updated to reflect
> these requirements for fast-forward upgrades.
> 
> Given the above it was agreed that this new definition of what
> fast-forward upgrades are and the best practices associated with them
> should be clearly documented somewhere. Various operators in the room
> highlighted that they would like to see a high-level document outlining
> the steps required to achieve this, hopefully written by someone with
> past experience of running this type of upgrade.
> 
> I failed to capture the names of the individuals who were interested in
> helping out here. If anyone is interested in helping out here please
> feel free to add your name to the actions either at the end of this mail
> or at the bottom of the pad.
> 
> In the afternoon we reviewed the current efforts within the community to
> implement fast-forward upgrades, covering TripleO, Charms (Juju) and
> openstack-ansible. While this was insightful to many in the room there
> didn't appear to be any obvious areas of collaboration outside of
> sharing best practice and defining the high level flow of a fast-forward
> upgrade.
> 
> Tuesday - NFV, SIG and actions
> --
> 
> Tuesday started with a discussion around NFV considerations with
> fast-forward upgrades. These ranged from the previously mentioned need
> for the data plane to remain active during the upgrade to the restricted
> nature of upgrades in NFV environments in terms of time and number of
> reboots.
> 
> It was highlighted that there are some serious as yet unresolved bugs in
> Nova regarding the live migration of instances using SR-IOV devices.
> This currently makes the moving of workloads either prior to or during
> the upgrade particularly difficult.
> 
> Rollbacks were also discussed and the need for any best practice
> documentation around fast-forward upgrades to include steps to allow the
> recovery of environments if things fail was also highlighted.
> 
> We then revisited an idea from the first day of finding or creating a
> SIG for this effort to call home. It was highlighted that there was a
> suggestion in the packaging room to create a Deployment / Lifecycle SIG.
> After speaking with a few individuals later in the week I've taken the
> action to reach out on the openstack-sigs mailing list for further
> input.
> 
> Finally, during a brief discussion on ways we could collaborate and share
> tooling for fast-forward upgrades a new tool to migrate configuration
> files between N to N+>=2 releases was introduced [5]. While interesting
> it was seen as a more generic utility that could also be used between N
> to N+1 upgrades.  AFAIK the authors joined the Oslo room shortly after
> this session ended to gain more 

Re: [openstack-dev] [neutron] MTU native ovs firewall driver

2017-09-20 Thread Ihar Hrachyshka
On Wed, Sep 20, 2017 at 9:33 AM, Ajay Kalambur (akalambu)
 wrote:
> So I was forced to explicitly set the MTU on br-int
> ovs-vsctl set int br-int mtu_request=9000
>
>
> Without this the tap device added to br-int would get MTU 1500
>
> Would this be something the ovs l2 agent can handle since it creates the 
> bridge?

Yes, I guess we could do that if it fixes your problem. The issue
stems from the fact that we use a single bridge for different networks
with different MTUs, and it does break some assumptions kernel folks
make about a switch (that all attached ports steer traffic in the same
l2 domain, which is not the case because of flows we set). You may
want to report a bug against Neutron and we can then see how to handle
that. It will probably not be as simple as setting the value to 9000
because different networks have different MTUs, and plugging those
mixed ports in the same bridge may trigger MTU updates on unrelated
tap devices. We will need to test how kernel behaves then.
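For reference, the manual workaround mentioned earlier in the thread can be sketched as follows (values illustrative; the mixed per-network MTU caveat above still applies, and this is not something the OVS L2 agent does automatically today):

```shell
# Request a jumbo MTU on the integration bridge (OVS 2.6+).
ovs-vsctl set Interface br-int mtu_request=9000
# Verify the request took effect.
ovs-vsctl get Interface br-int mtu_request
# Tap devices plugged into br-int afterwards should come up with MTU 9000.
ip link show br-int
```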

Also, you may be interested in reviewing an old openvswitch-dev@
thread that I once started here:
https://mail.openvswitch.org/pipermail/ovs-dev/2016-June/316733.html
Sadly, I never followed up with a test scenario that wouldn't involve
OpenStack, for OVS folks to follow up on, so it never moved anywhere.

Cheers,
Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] MTU native ovs firewall driver

2017-09-20 Thread Ajay Kalambur (akalambu)
So I was forced to explicitly set the MTU on br-int
ovs-vsctl set int br-int mtu_request=9000


Without this the tap device added to br-int would get MTU 1500

Would this be something the ovs l2 agent can handle since it creates the bridge?

Ajay




On 9/20/17, 7:28 AM, "Ajay Kalambur (akalambu)"  wrote:

>Hi
>I am using a large MTU setting of 9000. With the hybrid firewall driver I see
>the large MTU set on both the tap devices and the linuxbridges.
>When I switch from the OVS hybrid firewall driver to the native OVS firewall
>driver, I now see the tap device gets the default 1500 MTU.
>I have the right setting for global_physnet_mtu in neutron.conf and
>path_mtu in the ml2 conf.
>Do I need to do anything different to get a large MTU working with the native
>OVS firewall driver?
>Ajay
>
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][nova][mogan] How to show respect to the original authors?

2017-09-20 Thread Michael Still
Dims, I'm not sure that's actually possible though. Many of these files
have been through rewrites and developed over a large number of years.
Listing all authors isn't practical.

Given the horse has bolted on forking these files, I feel like a comment
acknowledging the original source file is probably sufficient.

What is concerning to me is that some of these files are part of the "ABI"
of nova, and if mogan diverges from that then I think we're going to see
user complaints in the future. Specifically configdrive, and metadata seem
like examples of this. I don't want to see us end up in another "managed
cut and paste" like early oslo where nova continues to develop these and
mogan doesn't notice the changes.

I'm not sure how we resolve that. One option would be to refactor these
files into a shared library.

Michael




On Wed, Sep 20, 2017 at 5:51 AM, Davanum Srinivas  wrote:

> Zhenguo,
>
> Thanks for bringing this up.
>
> For #1, yes please indicate which file from Nova, so if anyone wanted
> to cross check for fixes etc can go look in Nova
> For #2, When you pick up a commit from Nova, please make sure the
> commit message in Mogan has the following
>* The gerrit change id(s) of the original commit, so folks can
> easily go find the original commit in gerritt
>* Add "Co-Authored-By:" tags for each author in the original commit
> so they get credit
>
> Also, Please make sure you do not alter any copyright or license
> related information in the header when you first copy a file from
> another project.
>
> Thanks,
> Dims
>
> On Wed, Sep 20, 2017 at 4:20 AM, Zhenguo Niu 
> wrote:
> > Hi all,
> >
> > I'm from the Mogan team; we copied some code/frameworks from Nova since we
> > want to be a Nova with a bare-metal-specific API.
> > About why we are reinventing the wheel, you can find more information here [1].
> >
> > I would like to know what's the decent way to show our respect to the
> > original authors we copied from.
> >
> > After discussing with the team, we plan to do some improvements as below:
> >
> > 1. Adds some comments to the beginning of such files to indicate that they
> > leveraged the implementation of Nova.
> >
> > https://github.com/openstack/mogan/blob/master/mogan/baremetal/ironic/driver.py#L19
> > https://github.com/openstack/mogan/blob/master/mogan/console/websocketproxy.py#L17-L18
> > https://github.com/openstack/mogan/blob/master/mogan/consoleauth/manager.py#L17
> > https://github.com/openstack/mogan/blob/master/mogan/engine/configdrive.py#L17
> > https://github.com/openstack/mogan/blob/master/mogan/engine/metadata.py#L18
> > https://github.com/openstack/mogan/blob/master/mogan/network/api.py#L18
> > https://github.com/openstack/mogan/blob/master/mogan/objects/aggregate.py#L17
> > https://github.com/openstack/mogan/blob/master/mogan/objects/keypair.py#L17
> > https://github.com/openstack/mogan/blob/master/mogan/objects/server_fault.py#L17
> > https://github.com/openstack/mogan/blob/master/mogan/objects/server_group.py#L17
> > https://github.com/openstack/mogan/blob/master/mogan/scheduler/client/report.py#L17
> > https://github.com/openstack/mogan/blob/master/mogan/scheduler/filter_scheduler.py#L17
> >
> > 2. For changes where we follow what Nova changed, we should reference the
> > original authors in the commit messages.
> >
> >
> > Please let me know if there are something else we need to do or there are
> > already some existing principles we can follow, thanks!
> >
> >
> >
> > [1] https://wiki.openstack.org/wiki/Mogan
> >
> >
> > --
> > Best Regards,
> > Zhenguo Niu
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] dashboard query changes since upgrade

2017-09-20 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2017-09-19 09:35:26 -0400:
> On 09/19/2017 09:00 AM, Sean Dague wrote:
> > I've been iterating through gerrit dashboards this morning trying to
> > figure out why they no longer show any changes.
> > 
> > It looks like the query: label:Code-Review>=-2,self now matches changes
> > you haven't voted on. Previously (probably to a bug) this only matched
> > patches where you had a -2,-1,+1,+2 vote on them.
> > 
> > I'll be poking around today to figure out what the options are to get
> > equivalent functionality is out of the system, then update the gerrit
> > dashboards in gerrit-dash-creator based on that.
> 
> It appears that reviewedby:self actually works like we were hacking the
> old one to work. I've pushed through a bulk fix for most of the
> dashboards here - https://review.openstack.org/#/c/505247/
> 
> However local versions will need local patching.
> 
> -Sean
> 

I updated the TC review dashboard in https://review.openstack.org/505723
to produce:

https://review.openstack.org/#/dashboard/?foreach=project%3Aopenstack%2Fgovernance+is%3Aopen=Technical+Committee+Inbox+items=owner%3Aself+Vote+Items+I+have+not+voted+in+yet=topic%3Aformal%2Dvote+NOT+reviewedby%3Aself+NOT+owner%3Aself+Vote+Items=topic%3Aformal%2Dvote+Items+I+Haven%27t+Voted+On=path%3A%5Egoals%2F.%2A+NOT+reviewedby%3Aself+NOT+owner%3Aself+Items=path%3A%5Egoals%2F.%2A+Haven%27t+Voted+on+this+Draft=NOT+reviewedby%3Aself+NOT+owner%3Aself+at+Least+One+Objection=NOT+reviewedby%3Aself+NOT+owner%3Aself+label%3ACode%2DReview%3C%3D%2D1

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [ironic-ui] Proposing Anup Navare (anupn) for ironic-ui-core

2017-09-20 Thread Beth Elwell
++ I think Anup would be a great addition to the core team :) Thank you for all 
your contributions!

Beth

> On 20 Sep 2017, at 01:16, Julia Kreger  wrote:
> 
> Greetings fellow stackers!
> 
> I would like to propose adding Anup Navare to ironic-ui-core. Anup has
> been involved with ironic-ui this past cycle as well as various other
> areas across ironic which gives me further confidence in making this
> proposal.
> 
> I have already informally polled the existing ironic-ui-core members,
> to which everyone has responded positively. If there are no
> objections, I'll make the change in one week.
> 
> -Julia
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [zun][unit test] Any python utils can collect pci info?

2017-09-20 Thread Hongbin Lu
Hi Eric,

Thanks for pointing this out. This BP 
(https://blueprints.launchpad.net/zun/+spec/use-privsep) was created to track 
the introduction of privsep.

Best regards,
Hongbin
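As a rough sketch of the approach suggested later in this thread (shelling out to lspci and parsing its machine-readable output), here is a minimal Python example. The sample output is canned; on a real host the command would be run via subprocess (and, per the thread, through the new privsep mechanisms), and real VF detection would check the sysfs physfn symlink rather than the device name:

```python
# Hedged sketch: parse `lspci -D -nnmm` style output into dicts.
# SAMPLE is canned output for illustration; real code would invoke
# lspci itself. VF detection by name here is a rough heuristic only.

SAMPLE = (
    "Slot:\t0000:03:00.0\n"
    "Class:\tEthernet controller [0200]\n"
    "Vendor:\tIntel Corporation [8086]\n"
    "Device:\t82599ES 10-Gigabit SFI/SFP+ Network Connection [10fb]\n"
    "\n"
    "Slot:\t0000:03:10.0\n"
    "Class:\tEthernet controller [0200]\n"
    "Vendor:\tIntel Corporation [8086]\n"
    "Device:\t82599 Ethernet Controller Virtual Function [10ed]\n"
)

def parse_lspci(output):
    """Split lspci -mm output into one dict per device block."""
    devices = []
    for block in output.strip().split("\n\n"):
        dev = {}
        for line in block.splitlines():
            key, _, value = line.partition(":\t")
            dev[key] = value
        devices.append(dev)
    return devices

def is_vf(dev):
    # Heuristic only; a robust check reads /sys/bus/pci/devices/<addr>/physfn.
    return "Virtual Function" in dev.get("Device", "")

devs = parse_lspci(SAMPLE)
```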

> -Original Message-
> From: Eric Fried [mailto:openst...@fried.cc]
> Sent: September-18-17 10:51 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [zun][unit test] Any python utils can
> collect pci info?
> 
> You may get a little help from the methods in nova.pci.utils.
> 
> If you're calling out to lspci or accessing sysfs, be aware of this
> series [1] and do it via the new privsep mechanisms.
> 
> [1]
> https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:hurrah-for-privsep
> 
> On 09/17/2017 09:41 PM, Hongbin Lu wrote:
> > Hi Shunli,
> >
> >
> >
> > I am not aware of any prevailing Python utils for this. An alternative
> > is to shell out to Linux commands to collect the information. After a
> > quick search, it looks like xenapi [1] uses "lspci -vmmnk" to collect PCI
> > device detail info and "ls /sys/bus/pci/devices//" to
> > detect the PCI device type (PF or VF). FWIW, you might find it helpful
> > to refer to the implementation of Nova's xenapi driver for getting PCI
> > resources [2].
> > Hope it helps.
> >
> >
> >
> > [1] https://github.com/openstack/os-xenapi/blob/master/os_xenapi/dom0/etc/xapi.d/plugins/xenhost.py#L593
> >
> > [2] https://github.com/openstack/nova/blob/master/nova/virt/xenapi/host.py#L154
> >
> >
> >
> > Best regards,
> >
> > Hongbin
> >
> >
> >
> > *From:*Shunli Zhou [mailto:shunli6...@gmail.com]
> > *Sent:* September-17-17 9:35 PM
> > *To:* openstack-dev@lists.openstack.org
> > *Subject:* [openstack-dev] [zun][unit test] Any python utils can
> > collect pci info?
> >
> >
> >
> > Hi all,
> >
> >
> >
> > For this BP,
> > https://blueprints.launchpad.net/zun/+spec/support-pcipassthroughfilter,
> > Nova uses libvirt to collect the PCI device info. But for Zun, libvirt
> > seems to be a heavy dependency. Is there a Python utility that can be
> > used to collect detailed PCI device info, such as whether a network PCI
> > device is a PF or a VF, the device capabilities, etc.?
> >
> >
> >
> > Note: With 'lspci -D -nnmm', there is some info that cannot be obtained.
> >
> >
> >
> >
> >
> > Thanks
> >
> >
> >
> >
> __
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ansible] Meeting Cancelled 9/21

2017-09-20 Thread Amy Marrich
Just wanted to let everyone know we will be cancelling the IRC meeting on
9/21

Thanks,

Amy (spotz)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Marking <= mitaka EOL

2017-09-20 Thread Tony Breeds
On Wed, Sep 20, 2017 at 11:23:07AM -0400, Tony Breeds wrote:
> On Wed, Sep 20, 2017 at 08:39:56PM +1000, Joshua Hesketh wrote:
> > Hi All,
> > 
> > I've processed the list that Tony sent through this morning, removing the
> > branches and tagging their positions as described.
> 
> Thanks Josh!
>  
> > The only exception being that openstack/zaqar doesn't have stable/liberty
> > or stable/liberty2 branches to EOL.
> 
> Hmm interesting.  I'm not sure how they appeared in the list if they're
> already EOLd.  That makes me think that either my code is wrong or my
> local mirror is.
> 
> /me investigates.

Ahh yeah, my local mirror had bogus branches, 'origin/stable/liberty' and
'origin/stable/liberty2'; no idea why I created those (esp. as they
pointed at newton?)

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [ironic-ui] Proposing Anup Navare (anupn) for ironic-ui-core

2017-09-20 Thread Navare, Anup D
Thank you Julia for proposing me for the ironic-ui-core team.

Anup

-Original Message-
From: Julia Kreger [mailto:juliaashleykre...@gmail.com] 
Sent: Tuesday, September 19, 2017 5:16 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [ironic] [ironic-ui] Proposing Anup Navare (anupn) for 
ironic-ui-core

Greetings fellow stackers!

I would like to propose adding Anup Navare to ironic-ui-core. Anup has been 
involved with ironic-ui this past cycle as well as various other areas across 
ironic which gives me further confidence in making this proposal.

I have already informally polled the existing ironic-ui-core members, to which 
everyone has responded positively. If there are no objections, I'll make the 
change in one week.

-Julia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][QA][group-based-policy][zaqar][packaging_deb][fuel][networking-*] Marking <= mitaka EOL

2017-09-20 Thread Tony Breeds
On Wed, Sep 20, 2017 at 12:56:07PM +0200, Andreas Jaeger wrote:

> So, for fuel we have stable/7.0 etc - what are the plans for these? Can
> we retire them as well?
> 
> Those are even older AFAIK,

As discussed on IRC, when I started this I needed to start with
something small and simple, so I picked the series based branches.

I do intend to look at the older numeric stable branches, but I doubt
there is enough time for real community consultation before the zuulv3
migration.

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Marking <= mitaka EOL

2017-09-20 Thread Tony Breeds
On Wed, Sep 20, 2017 at 08:39:56PM +1000, Joshua Hesketh wrote:
> Hi All,
> 
> I've processed the list that Tony sent through this morning, removing the
> branches and tagging their positions as described.

Thanks Josh!
 
> The only exception being that openstack/zaqar doesn't have stable/liberty
> or stable/liberty2 branches to EOL.

Hmm interesting.  I'm not sure how they appeared in the list if they're
already EOLd.  That makes me think that either my code is wrong or my
local mirror is.

/me investigates.

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] upcoming spec-related deadlines

2017-09-20 Thread Brian Rosmaita
Hello Glance Community,

We came up with a fairly aggressive roadmap for Queens, particularly
given Glance's current personnel situation.  It's unlikely that much
more can be added, so we can freeze spec proposals early to eliminate
distractions.  Here are the relevant deadlines:

28 September 2017: Glance Spec Proposal Freeze
All Glance, python-glanceclient, and glance_store specs must be
proposed as patches to the glance-specs repository by 13:00 UTC on
Thursday 28 September 2017 (that is, one hour before the weekly Glance
meeting begins).  While this only allows one week for review and
revisions before the Glance Spec Freeze, you can make sure you have
extra review time by submitting your patch early.

6 October 2017: Glance Spec Freeze
All Glance, python-glanceclient, and glance_store specs must be merged
into the glance-specs repository by 23:59 on Friday 6 October 2017.
This is a necessary but not sufficient condition for inclusion in the
Queens release.

cheers,
brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Queens PTG: Thursday summary

2017-09-20 Thread Brian Rosmaita
Hi Belmiro,

Thanks for the feedback about the "hidden" property.  To push this
along, would you mind reading through this patch and the comments on
it and responding?  Or if you think it's close to a working proposal,
you could grab the text, revise, make Fei Long a co-author, and
propose it for approved/queens, and the community can continue the
discussion there.

cheers,
brian


On Wed, Sep 20, 2017 at 4:58 AM, Belmiro Moreira
 wrote:
> Hi Brian,
>
> as we discussed in the past, the image lifecycle has been a problem for us
> for a long time.
>
> However, I have some concerns about adding/maintaining a new project only
> to help the image discovery.
>
> At CERN we have a small set of images that we maintain and offer as
> "public" images to our users. Over the years this list has been growing
> because of new image releases. We keep the old image releases with
> visibility "public" because of old bugs in nova (already fixed) when
> live-migrating/resizing/migrating instances, and because we have some use
> cases where the user needs a very old release.
>
> Discovering the latest image release is hard. So we added an image
> property "recommended" that we update when a new image release is
> available. Also, we patched horizon to show the "recommended" images
> first.
>
> This helps our users to identify the latest image release, but we continue
> to show for each project the full list of public images + all personal
> user images. Some projects have an image list of hundreds of images.
>
> Having a "hidden" property as you are proposing would be great!
>
> For now, we are planning to solve this problem by using/abusing the
> visibility "community". Changing the visibility of old image releases to
> "community" will hide them from the default "image-list", but they will
> continue to be discoverable and available.
>
> Belmiro
>
>
> On Tue, Sep 19, 2017 at 8:24 PM, Brian Rosmaita 
> wrote:
>>
>> On Mon, Sep 18, 2017 at 7:47 AM, Belmiro Moreira
>>  wrote:
>> > Hi Brian,
>> > Thanks for the sessions summaries.
>> >
>> > We are really interested in the image lifecycle support.
>> > Can you elaborate how searchlight would help solving this problem?
>>
>> The role we see for searchlight is more on the image discovery end of
>> the problem. The context is that we were trying to think of a small
>> set of image metadata that could uniquely identify a series of images
>> (os_distro, os_version, local_version) so that it would be easy for end
>> users to discover the most recent revision with all the security
>> updates, etc.  For example, you might have:
>>
>> initial release of public image: os_distro=MyOS, os_version=3.2,
>> local_version=1
>> security update to package P1: os_distro=MyOS, os_version=3.2,
>> local_version=2
>> security update to package P2: os_distro=MyOS, os_version=3.2,
>> local_version=4
>>
>> The image_id would be different on each of these, and the operator
>> would prefer that users boot from the most recent.  Suppose an
>> operator also offers a pre-built database image built on each of
>> these, and a pre-built LAMP stack built on each of these, etc.  Each
>> would have the same os_distro and os_version value, so we'd need
>> another field to distinguish them, maybe os_content (values: bare, db,
>> lamp).  But then with the database image, for a particular (os_distro,
>> os_version, os_content) tuple, there might be several different images
>> built for the popular versions of that DB, so we'd need another field
>> for that as well.  So ultimately it looks like you'd need to make a
>> complicated query across several image properties, and searchlight
>> would easily allow you to do that.
>>
>> This still leaves us with the problem of making it simple to locate
>> the most recent version of each series of images, and that would be
>> where something like a 'hidden' property would come in.  It's been
>> proposed before, but was rejected, I think because it didn't cover
>> enough use cases.  But that was pre-searchlight, so introducing a
>> 'hidden' field may be a good move now.  It would be interesting to
>> hear what you think about that.
>>
>>
>> >
>> > thanks,
>> > Belmiro
>> > CERN
>> >
>> > On Fri, Sep 15, 2017 at 4:46 PM, Brian Rosmaita
>> > 
>> > wrote:
>> >>
>> >> For those who couldn't attend, here's a quick synopsis of what was
>> >> discussed yesterday.
>> >>
>> >> Please consult the etherpad for each session for details.  Feel free
>> >> to put questions/comments on the etherpads, and then put an item on
>> >> the agenda for the weekly meeting on Thursday 21 September, and we'll
>> >> continue the discussion.
>> >>
>> >>
>> >> Complexity removal
>> >> --
>> >> https://etherpad.openstack.org/p/glance-queens-ptg-complexity-removal
>> >>
>> >> In terms of a complexity contribution barrier, 

Re: [openstack-dev] [skip-level-upgrades][fast-forward-upgrades] PTG summary

2017-09-20 Thread Arkady.Kanevsky
Lee,
I can chair meeting in Sydney.
Thanks,
Arkady

-Original Message-
From: Lee Yarwood [mailto:lyarw...@redhat.com] 
Sent: Wednesday, September 20, 2017 8:29 AM
To: openstack-dev@lists.openstack.org; openstack-operat...@lists.openstack.org
Subject: [openstack-dev] [skip-level-upgrades][fast-forward-upgrades] PTG 
summary

My thanks again to everyone who attended and contributed to the skip-level
upgrades track over the first two days of last week's PTG.
I've included a short summary of our discussions below with a list of agreed 
actions for Queens at the end.

tl;dr s/skip-level/fast-forward/g

https://etherpad.openstack.org/p/queens-PTG-skip-level-upgrades

Monday - Define and rename
--

During our first session [1] we briefly discussed the history of the skip-level 
upgrades effort within the community and the various misunderstandings that 
have arisen from previous conversations around this topic at past events.

We agreed that at present the only way to perform upgrades between N and
N+>=2 releases of OpenStack was to upgrade linearly through each major
release, without skipping between the starting and target release of the 
upgrade.

This is contrary to previous discussions on the topic where it had been 
suggested that releases could be skipped if DB migrations for these releases 
were applied in bulk later in the process. As projects within the community 
currently offer no such support for this it was agreed to continue to use the 
supported N to N+1 upgrade jumps, albeit in a minimal, offline way.
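In control-flow terms, what was agreed is simply a linear loop of minimal,
offline N to N+1 hops. The sketch below illustrates only that shape; every
function name is a placeholder that just logs, not a real tool, package or
service name:

```shell
# Placeholder steps that only log what a deployment tool would do.
stop_control_plane()  { echo "stop control plane (workloads stay up)"; }
start_control_plane() { echo "start control plane"; }
install_release()     { echo "install $1 packages"; }
run_db_migrations()   { echo "run $1 DB migrations"; }

fast_forward_upgrade() {
    # Linear, offline pass through every intermediate release: no release
    # is skipped and no DB migrations are batched across releases.
    stop_control_plane
    for release in "$@"; do
        install_release "$release"
        run_db_migrations "$release"
    done
    start_control_plane
}

# Prints the stop -> per-release install/migrate -> start sequence:
fast_forward_upgrade newton ocata pike
```

The key point the sketch encodes is the agreement above: the intermediate
releases are still walked one by one, just with the control plane offline for
the whole pass.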

The name skip-level upgrades has had an obvious role to play in the confusion 
here and as such the renaming of this effort was discussed at length. Various 
suggestions are listed on the pad but for the time being I'm going to stick 
with the basic `fast-forward upgrades` name (FFOU, OFF, BOFF, FFUD etc were all 
close behind). This removes any notion of releases being skipped and should 
hopefully avoid any further confusion in the future.
 
Support by the projects for offline upgrades was then discussed with a recent 
Ironic issue [2] highlighted as an example where projects have required 
services to run before the upgrade could be considered complete. The additional 
requirement of ensuring both workloads and the data plane remain active during 
the upgrade was also then discussed. It was agreed that both the 
`supports-upgrades` [3] and `supports-accessible-upgrades` [4] tags should be 
updated to reflect these requirements for fast-forward upgrades.

Given the above it was agreed that this new definition of what fast-forward 
upgrades are and the best practices associated with them should be clearly 
documented somewhere. Various operators in the room highlighted that they would 
like to see a high level document outlining the steps required to achieve this, 
hopefully written by someone with past experience of running this type of 
upgrade.

I failed to capture the names of the individuals who were interested in helping 
out here. If anyone is interested, please feel free to add your name to the 
actions either at the end of this mail or at the bottom of the pad.

In the afternoon we reviewed the current efforts within the community to 
implement fast-forward upgrades, covering TripleO, Charms (Juju) and 
openstack-ansible. While this was insightful to many in the room there didn't 
appear to be any obvious areas of collaboration outside of sharing best 
practice and defining the high level flow of a fast-forward upgrade.

Tuesday - NFV, SIG and actions
--

Tuesday started with a discussion around NFV considerations with fast-forward 
upgrades. These ranged from the previously mentioned need for the data plane to 
remain active during the upgrade to the restricted nature of upgrades in NFV 
environments in terms of time and number of reboots.

It was highlighted that there are some serious as yet unresolved bugs in Nova 
regarding the live migration of instances using SR-IOV devices.
This currently makes the moving of workloads either prior to or during the 
upgrade particularly difficult.

Rollbacks were also discussed and the need for any best practice documentation 
around fast-forward upgrades to include steps to allow the recovery of 
environments if things fail was also highlighted.

We then revisited an idea from the first day of finding or creating a SIG for 
this effort to call home. It was highlighted that there was a suggestion in the 
packaging room to create a Deployment / Lifecycle SIG.
After speaking with a few individuals later in the week I've taken the action 
to reach out on the openstack-sigs mailing list for further input.

Finally, during a brief discussion on ways we could collaborate and share 
tooling for fast-forward upgrades a new tool to migrate configuration files 
between N to N+>=2 releases was introduced [5]. While interesting it was seen 
as a more generic utility that could also 


[openstack-dev] [neutron] MTU native ovs firewall driver

2017-09-20 Thread Ajay Kalambur (akalambu)
Hi,
I am using a large MTU setting of 9000. With the hybrid firewall driver I see 
the large MTU set on both the tap devices and the linux bridges.
When I switch from the OVS hybrid firewall driver to the native OVS firewall 
driver, the tap device gets the default 1500 MTU instead.
I have the right settings for global_physnet_mtu in neutron.conf and path_mtu 
in the ML2 config.
Do I need to do anything different to get a large MTU working with the native 
OVS firewall driver?
Ajay
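For reference, the two options mentioned above map to config roughly like the
following (a sketch; the 9000 values are just the jumbo-frame example from this
thread, and file paths depend on the deployment):

```ini
# /etc/neutron/neutron.conf
[DEFAULT]
# MTU of the underlying physical network(s)
global_physnet_mtu = 9000

# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
# Maximum MTU available for networks using tunnel overlays
path_mtu = 9000
```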



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [publiccloud-wg] PublicCloud WG Meetup at OpenStack Days UK next week

2017-09-20 Thread Tobias Rydberg

Hi folks,

Just a quick heads-up. We have a planning meeting scheduled for the new 
OpenStack Passport Program at the OpenStack Days UK conference in London next 
week.

When: 26th September, 2017 @ 2pm
Where: Etc Venues, Bishopsgate 155, London @ Bishopsgate Room 1.

We're planning to outline the goals of the Passport Program and discuss 
potential technical solutions.

More info is available at: https://openstackdays.uk
Etherpad: https://etherpad.openstack.org/p/MEETUPS-2017-publiccloud-wg

Hope to see you there!

Tobias Rydberg
Public Cloud WG Chair

PS We'll also be getting together for a WG social the evening before the 
conference (Monday 25th September). Time/place TBD. Please drop me an email if 
you'd like to join us!





Re: [openstack-dev] [nova][api] why need PUT /servers/{server_id}/metadata/{key} ?

2017-09-20 Thread Matt Riedemann

On 9/20/2017 8:33 AM, Alex Xu wrote:
  is there any use-case that people update server's metadata such 
frequently?


If you have automation tooling updating the metadata for whatever reason 
it could be a problem.


This was the reported bug FWIW:

https://bugs.launchpad.net/nova/+bug/1650188

--

Thanks,

Matt



Re: [openstack-dev] [sahara] PTG Summary

2017-09-20 Thread Jeremy Freudberg
Thanks Telles - we all appreciate your leadership which was conducive to a
very productive PTG.

Another exciting point to highlight is more community visibility and
involvement regarding Sahara's plugins:
1) Plugin upgrades tied to a release milestone visible on the official
release schedule
2) Collecting community and operator feedback about what new services
Sahara should support

Again, thanks to all involved and hope to see you all at next PTG.

On Wed, Sep 20, 2017 at 10:00 AM, Telles Nobrega 
wrote:

> Hello Saharan's and interested folks, this past week we had our PTG at
> Denver and it was a very productive one, with so many discussions and too
> much to do after that. In the next lines I will describe a little of the
> main decisions made by the Sahara team.
>
> Pike Retrospective
> -
> We started our PTG with a retrospective from the previous cycle. Since
> there were only sahara members on the room and we basically walked through
> the good and bads from the cycle.
>
> On the bad side:
>  - Losing too many community members
>  - First PTL cycle was a little overwhelming
>  - We had too much errors not caught by tests
>  - We lost our third-party CI
> On the good side:
>  - Most of our priorities was finished
>  - Even with minimum hands we managed to get plugins updated and deliver a
> good product
>  - We met the documentation in project goal
>
> Documentation
> 
> Alongside the community with the goal of moving documentation into project
> tree we were able to move a lot of our documentation into tree with just
> details left for Queens cycle.
> Also, we plan to do a Documentation day in order to update and improve our
> documentation.
>
> APIv2
> 
> This feature has been around for quite some time and in Pike we had a good
> advance in its implementation. We were able to do most of the work needed
> for its release but there are a few steps left. Our plan is to finish up
> all these steps in Queens and release APIv2 as experimental (at least).
>
> Sahara-CI
> --
> During the Pike cycle we lost our third-party CI. In Queens we need to
> gather resources to deploy a new CI as well as improve our CI in infra with
> vanilla multinode jobs. We plan to introduce some nightly jobs for more
> exhaustive testing with large files.
> We are also working on automating the CI deployment with ansible.
>
> Sahara-files
> 
> Just as Sahara-CI we might be losing our sahara files hosts soon and we
> need to remove all references of it from code (which is almost done) and
> copy the files for a different source.
>
> Specs cleanup and prioritization
> ---
> We took some time to review old specs and do a little cleanup of our
> queues. On that we moved some specs into backlog and added some new as well
> as fixed some typos on older specs.
>
> New features and bugs
> ---
> We discussed new features and bugs and we have a priority list for the
> work we plan to do in Queens and possible for next two cycles. If you are
> interested in a deeper look into planned features and bugs take a look at
> [0]
>
> Community-goals
> 
> For Queens there are two community goals. The first one is related to
> Split tempest plugin and this was done in Sahara a couple cycles ago. The
> second is regarding policy in code which we already have a patch (WIP) up
> [1] and should get it done soon.
>
> Thanks all!
>
> [0] https://etherpad.openstack.org/p/sahara-queens-ptg
> [1] https://review.openstack.org/#/c/503221/
>
>
>
> --
>
> TELLES NOBREGA
>
> SOFTWARE ENGINEER
>
> Red Hat I 
>
> tenob...@redhat.com
> 
> TRIED. TESTED. TRUSTED. 
>
>
>


[openstack-dev] [ironic] code patches & bugs

2017-09-20 Thread Loo, Ruby
Hi,

Just a reminder: if you are submitting a code patch to fix something that 
could be apparent to our users, it should be considered a bug (or a feature). 
Which means that there should be a launchpad bug [1] associated with it, and 
a release note.

I know we've been fairly lax about that in the past and I know that it is yet 
more-stuff-to-do-just-to-fix-something. However, we (I) just encountered a case 
where we are backporting a patch that did not originally have a bug associated 
with it nor a release note :-(

Thanks!
--ruby

[1] https://bugs.launchpad.net/ironic


[openstack-dev] [tripleo] Deploy Keystone, Glance and Mariadb in Kubernetes with TripleO

2017-09-20 Thread Flavio Percoco

Hey Folks,

I just posted a screencast sharing the progress I've made on the kubernetes
effort[0]. I've pasted part of the blog post below in case you want to discuss
some parts of it.

What if I want to play with it?
===

Here's a small recap of what's needed to play with this PoC. Before you do,
though, bear in mind that this work is in its very early days and that there are
*many* things that don't work or that could be better. As usual, any kind of
feedback and/or contribution is welcome. Note that some of the steps below
require root access.

1# Clone the tripleo-apbs repository and its submodules:

   git clone --recursive https://github.com/tripleo-apb/tripleo-apbs

2# Build the images you want to run:

   ./build.sh mariadb
   ./build.sh glance
   ./build.sh keystone

3# Clone the `undercloud_containers` repo and run the `doit.sh` script. This
repo is meant to be used only for development purposes:

   git clone https://github.com/flaper87/undercloud_containers

4# Prepare the environment

cd undercloud_containers && ./doit.sh

5# Deploy the undercloud (as root)

   cd $HOME && ./run.sh

The `doit.sh` script uses my fork of tripleo-heat-templates, which contains the
changes to use the APBs. It's important to highlight that this fork doesn't
introduce changes to the existing API. You can see the comparison between the
fork and the main tripleo-heat-templates repo [1]:

[0] http://blog.flaper87.com/glance-keystone-mariadb-on-k8s-with-tripleo.html
[1] 
https://github.com/openstack/tripleo-heat-templates/compare/master...flaper87:tht-apbs

Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [all] pypy broken for many repos

2017-09-20 Thread Andreas Jaeger
On 2017-09-18 22:44, Sean McGinnis wrote:
> On Sun, Sep 17, 2017 at 08:32:00AM +0200, Andreas Jaeger wrote:
>> Currently we use pypy for a couple of projects and many of these fail
>> with the version of pypy that we use.
>>
>> A common error is  "Pypy fails with "RuntimeError: cryptography 1.9 is
>> not compatible with PyPy < 5.3. Please upgrade PyPy to use this library.".
>>
>> Example:
>> http://logs.openstack.org/51/503951/1/check/gate-python-neutronclient-pypy/206ac6a/
>>
>> I propose in https://review.openstack.org/#/c/504748/ to remove pypy
>> from those repos where it fails.
>>
>> Alternative would be investigating what is broken and fix it. Anybody
>> interested to do this?
>>
>> Or should we remove the pypy jobs where they fail. I pushed
>> https://review.openstack.org/504748 up and marked it as WIP, will wait
>> for a week to see outcome of this discussion,
>>
>> Andreas
> 
> I noticed this when we switched over to using cryptography. I think at the 
> time
> the consensus was - meh. IIRC, it's an issue that we use an older version of
> pypy. If system packages are available for a newer version, it probably would
> be good to test that. But I have never seen pypy use in the wild, so I'm not
> sure if it would be worth the effort.

Also, depends on anybody willing to do it.

> Maybe easier just declaring pypy unsupported for service projects?

How do we do this in the best way?

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




[openstack-dev] [sahara] PTG Summary

2017-09-20 Thread Telles Nobrega
Hello Saharans and interested folks, this past week we had our PTG in
Denver and it was a very productive one, with many discussions and much
to do afterwards. In the next lines I will describe a little of the
main decisions made by the Sahara team.

Pike Retrospective
-
We started our PTG with a retrospective of the previous cycle. Since
there were only Sahara members in the room, we basically walked through
the goods and bads of the cycle.

On the bad side:
 - Losing too many community members
 - First PTL cycle was a little overwhelming
 - We had too many errors not caught by tests
 - We lost our third-party CI
On the good side:
 - Most of our priorities were finished
 - Even with minimal hands we managed to get plugins updated and deliver a
good product
 - We met the documentation-in-project goal

Documentation

Alongside the community goal of moving documentation into the project
tree, we were able to move a lot of our documentation in-tree, with just
details left for the Queens cycle.
Also, we plan to do a Documentation day in order to update and improve our
documentation.

APIv2

This feature has been around for quite some time, and in Pike we made good
progress on its implementation. We were able to do most of the work needed
for its release, but there are a few steps left. Our plan is to finish up
all these steps in Queens and release APIv2 as experimental (at least).

Sahara-CI
--
During the Pike cycle we lost our third-party CI. In Queens we need to
gather resources to deploy a new CI as well as improve our CI in infra with
vanilla multinode jobs. We plan to introduce some nightly jobs for more
exhaustive testing with large files.
We are also working on automating the CI deployment with ansible.

Sahara-files

Just as with Sahara-CI, we might be losing our sahara-files hosts soon, and
we need to remove all references to them from code (which is almost done)
and copy the files to a different source.

Specs cleanup and prioritization
---
We took some time to review old specs and do a little cleanup of our
queues. On that we moved some specs into backlog and added some new as well
as fixed some typos on older specs.

New features and bugs
---
We discussed new features and bugs, and we have a priority list for the work
we plan to do in Queens and possibly the next two cycles. If you are
interested in a deeper look into planned features and bugs, take a look at
[0].

Community-goals

For Queens there are two community goals. The first one is related to
splitting the tempest plugin, and this was done in Sahara a couple of cycles
ago. The second is regarding policy-in-code, for which we already have a
patch (WIP) up [1] and should get it done soon.

Thanks all!

[0] https://etherpad.openstack.org/p/sahara-queens-ptg
[1] https://review.openstack.org/#/c/503221/



-- 

TELLES NOBREGA

SOFTWARE ENGINEER

Red Hat I 

tenob...@redhat.com

TRIED. TESTED. TRUSTED. 


Re: [openstack-dev] [tc][nova][mogan] How to show respect to the original authors?

2017-09-20 Thread Flavio Percoco

On 20/09/17 12:21 +, Jeremy Stanley wrote:

On 2017-09-20 07:51:29 -0400 (-0400), Davanum Srinivas wrote:
[...]

please indicate which file from Nova, so if anyone wanted to cross
check for fixes etc can go look in Nova

[...]

While the opportunity has probably passed in this case, the ideal
method is to start with a Git fork of the original as your seed
project (perhaps with history pruned to just the files you're
reusing via git filter-branch or similar). This way the complete
change history of the files in question is preserved for future
inspection.


If it's not too late, I would definitely recommend going with a fork, fwiw.
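The history pruning Jeremy mentions could look roughly like the sketch below:
rewrite a fresh clone of the seed project so that only the reused paths (and
their change history) remain. The path patterns and repo are illustrative
assumptions, and `git filter-branch` can be slow on a repo the size of nova:

```shell
# Sketch: inside a fresh clone of the seed project, drop every path that
# does not match the given extended-regex patterns from all of history.
prune_history_to() {
    # $1..$n: extended-regex patterns of paths to KEEP
    local keep_re
    keep_re=$(printf '%s|' "$@"); keep_re=${keep_re%|}
    FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch -f --prune-empty \
        --index-filter \
        "git ls-files | grep -Ev '^($keep_re)' | xargs -r git rm -q --cached --ignore-unmatch" \
        -- --all
}

# e.g. keep only the API and DB layers (patterns are hypothetical):
# prune_history_to 'nova/api/.*' 'nova/db/.*'
```

Note that filter-branch keeps backups under `refs/original/`, so the rewritten
repo should be re-cloned (or those refs deleted) before being used as a seed.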

Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-20 Thread Jeremy Stanley
On 2017-09-20 15:17:28 +0200 (+0200), Attila Fazekas wrote:
[...]
> The image building was the good old working solution and unless
> the image build become a super expensive thing, this is still the
> best option.
[...]

It became a super expensive thing, and that's the main reason we
stopped doing it. Now that Nodepool has grown support for
distributed/parallel image building and uploading, the cost model
may have changed a bit in that regard so I agree it doesn't hurt to
revisit that decision. Nevertheless it will take a fair amount of
convincing that the savings balances out the costs (not just in
resource consumption but also administrative overhead and community
impact... if DevStack gets custom images prepped to make its jobs
run faster, won't Triple-O, Kolla, et cetera want the same and where
do we draw that line?).
-- 
Jeremy Stanley




Re: [openstack-dev] [nova][api] why need PUT /servers/{server_id}/metadata/{key} ?

2017-09-20 Thread Ghanshyam Mann
On Wed, Sep 20, 2017 at 10:14 PM, Matt Riedemann  wrote:
> On 9/20/2017 12:48 AM, Chen CH Ji wrote:
>>
>> In analyzing other code, it seems we don't need PUT
>> /servers/{server_id}/metadata/{key}?
>>
>> The {key} is only used to check whether it's in the body, and we will
>> honor the whole body (body['meta'] in the code)
>>
>> https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/server_metadata.py#L80
>>
>> looks like it's identical to
>> PUT /servers/{server_id}/metadata
>>
>> Why do we need this API? Or should it be something like
>>
>> PUT /servers/{server_id}/metadata/{key}, where we only accept a value to
>> modify the meta given by {key} on the API side?
>>
>> Best Regards!
>>
>> Kevin (Chen) Ji 纪 晨
>>
>> Engineer, zVM Development, CSTL
>> Notes: Chen CH Ji/China/IBM@IBMCN Internet: jiche...@cn.ibm.com
>> Phone: +86-10-82451493
>> Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
>> Beijing 100193, PRC
>>
>>
>>
>
> This API is a bit confusing, and the code is too since it all goes down to
> some common code, and I think you're missing the 'delete' flag:
>
> https://github.com/openstack/nova/blob/5bf1bb47c7e17c26592a699d07c2faa59d98bfb8/nova/compute/api.py#L3830
>
> If delete=False, as it is in this case, we only add/update the existing
> metadata with the new metadata from the request body. If delete=True, then
> we overwrite the instance metadata with whatever is in the request.
>
> Does that answer your question?
>
> This API is problematic and we have bugs against it since it's not atomic,
> i.e. two concurrent requests will overwrite one of them. We should really
> have a generation ID or etag on this data to be sure it's atomically
> updated.
>

I think the confusion is about updating a single metadata item by key [1],
and whether there is a bug that allows updating more than one item. There is
not, as the schema restricts the request body to a single item.

This means that updating a metadata item by key updates that key's value
only.

Also we do have tests to verify that:
https://github.com/openstack/nova/blob/5bf1bb47c7e17c26592a699d07c2faa59d98bfb8/nova/tests/unit/api/openstack/compute/test_server_metadata.py#L529


[1]
https://developer.openstack.org/api-ref/compute/#create-or-update-metadata-item

> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][stable] EOL tags and upper-constraints.txt in tox.ini

2017-09-20 Thread Jeremy Stanley
On 2017-09-20 08:41:14 -0400 (-0400), Doug Hellmann wrote:
[...]
> Is there any reason not to use the published files for all regular
> builds, instead of having tox.ini point to a git URL? Maybe only for
> regular builds off of stable branches?

I'm not sure what you mean by "regular builds" but the plan as I
understood it was to switch from git.o.o URLs to releases.o.o URLs
in the tox.ini files in all branches of projects already consuming
constraints files that way.
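Concretely, the switch would be a one-line change to the fallback URL in each project's tox.ini. A rough sketch of what the result might look like (the deps layout is illustrative and the releases.openstack.org URL is the "something like" example discussed here, not a final published location):

```ini
[testenv]
# The fallback URL is used only when UPPER_CONSTRAINTS_FILE is not set by
# the caller; the CI system passes the constraints file in directly, so
# this mainly affects developers running tox locally.
deps =
    -c{env:UPPER_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/queens-upper-constraints.txt}
    -r{toxinidir}/test-requirements.txt
```

The point of the static, series-specific URL is that it remains valid after EOL, when stable branches disappear from the git repository.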

As early as now (if we already had the publication job in place) we
could update them all in master branches to retrieve something like
https://releases.openstack.org/constraints/queens-upper-constraints.txt
and then once stable/queens is branched in all repos (including the
requirements repo), switch the job to begin publishing to a URL for
rocky and push tox.ini updates to master branches switching the URL
to that as early in the cycle as possible. Alternatively, we could
publish master and queens copies (identical initially) and expect
the master branch tox.ini files to refer to master but then switch
them to queens on the stable/queens branch during RC. It just comes
down to which the stable/requirements/release teams think makes the
most sense from a procedural perspective.

Remember also that the timing on this doesn't require extreme
precision and there are no chicken-and-egg/catch-22 problems
associated with updating because the URLs in question are merely a
fallback method for when the constraints file is not already
provided locally. In the CI system, we directly provide constraints
files so that we can respect depends-on to requirements repo changes
and the like, so in practice this fallback is primarily for the
convenience of developers running tox locally.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][api] why need PUT /servers/{server_id}/metadata/{key} ?

2017-09-20 Thread Alex Xu
2017-09-20 21:14 GMT+08:00 Matt Riedemann :

> On 9/20/2017 12:48 AM, Chen CH Ji wrote:
>
>> in analyzing other code, found seems we don't need PUT
>> /servers/{server_id}/metadata/{key} ?
>>
>> as the id is only used for check whether it's in the body and we will
>> honor the whole body (body['meta'] in the code)
>> https://github.com/openstack/nova/blob/master/nova/api/opens
>> tack/compute/server_metadata.py#L80
>>
>> looks like it's identical to
>> PUT /servers/{server_id}/metadata
>>
>> why we need this API or it should be something like
>>
>> PUT /servers/{server_id}/metadata/{key}but we only accept a value to
>> modify the meta given by {key} in the API side?
>>
>> Best Regards!
>>
>> Kevin (Chen) Ji 纪 晨
>>
>> Engineer, zVM Development, CSTL
>> Notes: Chen CH Ji/China/IBM@IBMCN Internet: jiche...@cn.ibm.com
>> Phone: +86-10-82451493
>> Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
>> Beijing 100193, PRC
>>
>>
>>
>>
> This API is a bit confusing, and the code is too since it all goes down to
> some common code, and I think you're missing the 'delete' flag:
>
> https://github.com/openstack/nova/blob/5bf1bb47c7e17c26592a6
> 99d07c2faa59d98bfb8/nova/compute/api.py#L3830
>
> If delete=False, as it is in this case, we only add/update the existing
> metadata with the new metadata from the request body. If delete=True, then
> we overwrite the instance metadata with whatever is in the request.
>
> Does that answer your question?
>
> This API is problematic and we have bugs against it since it's not atomic,
> i.e. two concurrent requests will overwrite one of them. We should really
> have a generation ID or etag on this data to be sure it's atomically
> updated.


Is there any use case where people update a server's metadata that frequently?


>
> --
>
> Thanks,
>
> Matt
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [skip-level-upgrades][fast-forward-upgrades] PTG summary

2017-09-20 Thread Lee Yarwood
My thanks again to everyone who attended and contributed to the
skip-level upgrades track over the first two days of last week's PTG.
I've included a short summary of our discussions below with a list of
agreed actions for Queens at the end.

tl;dr s/skip-level/fast-forward/g

https://etherpad.openstack.org/p/queens-PTG-skip-level-upgrades

Monday - Define and rename
--

During our first session [1] we briefly discussed the history of the
skip-level upgrades effort within the community and the various
misunderstandings that have arisen from previous conversations around
this topic at past events.

We agreed that at present the only way to perform upgrades between N and
N+>=2 releases of OpenStack is to upgrade linearly through each major
release, without skipping any release between the starting and target
release of the upgrade.

This is contrary to previous discussions on the topic where it had been
suggested that releases could be skipped if DB migrations for these
releases were applied in bulk later in the process. As projects within
the community currently offer no such support for this it was agreed to
continue to use the supported N to N+1 upgrade jumps, albeit in a
minimal, offline way.

The name skip-level upgrades has had an obvious role to play in the
confusion here and as such the renaming of this effort was discussed at
length. Various suggestions are listed on the pad but for the time being
I'm going to stick with the basic `fast-forward upgrades` name (FFOU,
OFF, BOFF, FFUD etc were all close behind). This removes any notion of
releases being skipped and should hopefully avoid any further confusion
in the future.
 
Support by the projects for offline upgrades was then discussed with a
recent Ironic issue [2] highlighted as an example where projects have
required services to run before the upgrade could be considered
complete. The additional requirement of ensuring both workloads and the
data plane remain active during the upgrade was also then discussed. It
was agreed that both the `supports-upgrades` [3] and
`supports-accessible-upgrades` [4] tags should be updated to reflect
these requirements for fast-forward upgrades.

Given the above it was agreed that this new definition of what
fast-forward upgrades are and the best practices associated with them
should be clearly documented somewhere. Various operators in the room
highlighted that they would like to see a high level document outline
the steps required to achieve this, hopefully written by someone with
past experience of running this type of upgrade.

I failed to capture the names of the individuals who were interested in
helping out here. If you are interested, please feel free to add your name
to the actions either at the end of this mail or at the bottom of the pad.

In the afternoon we reviewed the current efforts within the community to
implement fast-forward upgrades, covering TripleO, Charms (Juju) and
openstack-ansible. While this was insightful to many in the room there
didn't appear to be any obvious areas of collaboration outside of
sharing best practice and defining the high level flow of a fast-forward
upgrade.

Tuesday - NFV, SIG and actions
--

Tuesday started with a discussion around NFV considerations with
fast-forward upgrades. These ranged from the previously mentioned need
for the data plane to remain active during the upgrade to the restricted
nature of upgrades in NFV environments in terms of time and number of
reboots.

It was highlighted that there are some serious as yet unresolved bugs in
Nova regarding the live migration of instances using SR-IOV devices.
This currently makes the moving of workloads either prior to or during
the upgrade particularly difficult.

Rollbacks were also discussed and the need for any best practice
documentation around fast-forward upgrades to include steps to allow the
recovery of environments if things fail was also highlighted.

We then revisited an idea from the first day of finding or creating a
SIG for this effort to call home. It was highlighted that there was a
suggestion in the packaging room to create a Deployment / Lifecycle SIG.
After speaking with a few individuals later in the week I've taken the
action to reach out on the openstack-sigs mailing list for further
input.

Finally, during a brief discussion on ways we could collaborate and share
tooling for fast-forward upgrades a new tool to migrate configuration
files between N to N+>=2 releases was introduced [5]. While interesting
it was seen as a more generic utility that could also be used between N
to N+1 upgrades.  AFAIK the authors joined the Oslo room shortly after
this session ended to gain more feedback from that team.

Actions
---

- Modify the `supports-upgrades`[3] and `supports-accessible-upgrades`[4] tags

  I have yet to look into the formal process around making changes to
  these tags but I will aim to make a start ASAP. 

- Find an Ops 

Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-20 Thread Attila Fazekas
On Wed, Sep 20, 2017 at 3:11 AM, Ian Wienand  wrote:

> On 09/20/2017 09:30 AM, David Moreau Simard wrote:
>
>> At what point does it become beneficial to build more than one image per
>> OS
>> that is more aggressively tuned/optimized for a particular purpose ?
>>
>
> ... and we can put -dsvm- in the jobs names to indicate it should run
> on these nodes :)
>
> Older hands than myself will remember even more issues, but the
> "thicker" the base-image has been has traditionally just lead to a lot
> more corners for corner-cases can hide in.  We saw this all the time
> with "snapshot" images where we'd be based on upstream images that
> would change ever so slightly and break things, leading to
> diskimage-builder and the -minimal build approach.
>
> That said, in a zuulv3 world where we are not caching all git and have
> considerably smaller images, a nodepool that has a scheduler that
> accounts for flavor sizes and could conceivably understand similar for
> images, and where we're building with discrete elements that could
> "bolt-on" things like a list-of-packages install sanely to daily
> builds ... it's not impossible to imagine.
>
> -i


The problem is that these package install steps are not really I/O-bound
in most cases; even at regular DSL speeds you can frequently see the
decompress and post-config steps take more time.

The site-local cache/mirror has a visible benefit, but it does not
eliminate the issue.

The main enemy is the single-threaded, CPU-intensive work in most
install/config scripts; the second most common issue is serially executing
high-latency steps, which in the end saturates neither the CPU nor the I/O.

Fat images are generally cheaper even if your cloud has only 1Gb Ethernet
for image transfer. You gain more by baking the packages into the image
than the 1GbE can take from you, because you also save the time that would
otherwise be lost on CPU-intensive operations or random disk access.
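The trade-off above can be put into rough numbers (every figure below is an illustrative assumption, not a measurement):

```python
# Back-of-envelope: does baking ~500 MB of packages into the image pay off?
image_extra_mb = 500.0   # assumed extra image size from baked-in packages
nic_mbps = 1000.0        # 1GbE link used for image transfer

# Extra seconds spent copying the larger image to the node:
transfer_cost_s = image_extra_mb * 8 / nic_mbps

# Assumed per-job cost of installing the same packages at run time:
download_s = 30.0                    # fetch from a site-local mirror
decompress_and_postinst_s = 120.0    # single-threaded, CPU-bound work
install_cost_s = download_s + decompress_and_postinst_s

print(f"extra transfer: {transfer_cost_s:.0f}s, "
      f"runtime install: {install_cost_s:.0f}s")
```

Under these assumptions the fat image wins by a wide margin; the runtime install only becomes competitive if decompression and post-configuration drop well below the transfer cost.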

It is safe to add all the distro packages used by devstack to the cloud
image.

Historically we had issues with some base image packages whose presence
changed the behavior of some component, for example firewalld vs. libvirt
(likely an already-solved issue); these packages got explicitly removed by
devstack when necessary. Those packages were not requested by devstack!

Fedora/CentOS also has/had issues with distro content overlapping pypi
packages on the main filesystem (long story, no finger-pointing);
generally it is not a good idea to install packages from pypi into an
image whose content might be overridden by the distro's package manager.

The distribution package install time delays the gate response: when the
slowest-running job is delayed by this, the whole response is delayed.

It is a user-facing latency issue, which should be solved even if the cost
were higher.

Image building was the good old working solution, and unless the image
build becomes super expensive, it is still the best option.

A site-local mirror is also expected to help make the image build step(s)
faster and safer.

The other option is the ready scripts.


>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][api] why need PUT /servers/{server_id}/metadata/{key} ?

2017-09-20 Thread Matt Riedemann

On 9/20/2017 12:48 AM, Chen CH Ji wrote:
While analyzing other code, it seems we don't need PUT 
/servers/{server_id}/metadata/{key}?


as the key is only used to check whether it's in the body, and we will 
honor the whole body (body['meta'] in the code)

https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/server_metadata.py#L80

looks like it's identical to
PUT /servers/{server_id}/metadata

Why do we need this API? Or should it be something like

PUT /servers/{server_id}/metadata/{key}, where we only accept a value to 
modify the meta given by {key} on the API side?


Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN Internet: jiche...@cn.ibm.com
Phone: +86-10-82451493
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian 
District, Beijing 100193, PRC



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



This API is a bit confusing, and the code is too since it all goes down 
to some common code, and I think you're missing the 'delete' flag:


https://github.com/openstack/nova/blob/5bf1bb47c7e17c26592a699d07c2faa59d98bfb8/nova/compute/api.py#L3830

If delete=False, as it is in this case, we only add/update the existing 
metadata with the new metadata from the request body. If delete=True, 
then we overwrite the instance metadata with whatever is in the request.
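The delete flag semantics described above can be modeled with a small stand-in function (a simplified sketch of the behavior, not the actual nova code):

```python
def update_instance_metadata(current, new, delete=False):
    """Model of the compute API's metadata update semantics.

    delete=False: merge the new metadata into the existing metadata.
    delete=True:  replace the existing metadata wholesale.
    """
    if delete:
        return dict(new)
    merged = dict(current)
    merged.update(new)
    return merged

existing = {'a': '1', 'b': '2'}
# Merge: 'a' survives, 'b' is updated, 'c' is added.
print(update_instance_metadata(existing, {'b': '9', 'c': '3'}))
# Overwrite: only the keys from the request remain.
print(update_instance_metadata(existing, {'b': '9'}, delete=True))
```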


Does that answer your question?

This API is problematic and we have bugs against it since it's not 
atomic, i.e. two concurrent requests will overwrite one of them. We 
should really have a generation ID or etag on this data to be sure it's 
atomically updated.
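A generation-ID scheme like the one suggested here would look roughly like this (a toy in-memory model; in a real fix the compare-and-increment would have to happen atomically at the database layer):

```python
class MetadataStore:
    """Toy store illustrating generation-ID (etag-style) compare-and-swap."""

    def __init__(self):
        self._meta = {}
        self._gen = 0

    def read(self):
        # Return the metadata together with the generation it was read at.
        return dict(self._meta), self._gen

    def update(self, new_meta, expected_gen):
        # Reject the write if someone else updated since our read.
        if expected_gen != self._gen:
            return False
        self._meta = dict(new_meta)
        self._gen += 1
        return True

store = MetadataStore()
meta, gen = store.read()
meta['key'] = 'value'
assert store.update(meta, gen)            # first writer wins
assert not store.update({'x': 'y'}, gen)  # stale generation is rejected
```

A client whose write is rejected re-reads, re-applies its change, and retries, so two concurrent requests can no longer silently clobber each other.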


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][api] why need PUT /servers/{server_id}/metadata/{key} ?

2017-09-20 Thread Ghanshyam Mann
On Wed, Sep 20, 2017 at 2:48 PM, Chen CH Ji  wrote:
> in analyzing other code, found seems we don't need PUT
> /servers/{server_id}/metadata/{key} ?
>
> as the id is only used for check whether it's in the body and we will honor
> the whole body (body['meta'] in the code)
> https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/server_metadata.py#L80

The schema restricts the body to 1 item:
- 
https://github.com/openstack/nova/blob/5bf1bb47c7e17c26592a699d07c2faa59d98bfb8/nova/api/openstack/compute/schemas/server_metadata.py#L32

Users will not be able to pass more than 1 meta item to this API. Does it
not work that way, or are you able to pass more than 1 key in the body?
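The constraint described here can be mimicked without nova's schema machinery (a hand-rolled stand-in for the single-item JSON-Schema check; names are illustrative):

```python
def validate_meta_item_body(body):
    """Stand-in for the schema check on PUT .../metadata/{key}.

    Checks that the body is {'meta': {<exactly one key>: <string value>}},
    mirroring a min/max-properties-of-1 style constraint.
    """
    meta = body.get('meta')
    if not isinstance(meta, dict) or len(meta) != 1:
        raise ValueError("'meta' must contain exactly one item")
    key, value = next(iter(meta.items()))
    if not isinstance(value, str):
        raise ValueError('metadata values must be strings')
    return key, value

print(validate_meta_item_body({'meta': {'foo': 'bar'}}))  # ('foo', 'bar')
```

A body with two keys (or none) is rejected before the update logic ever runs, which is why only the single named key can be modified.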


>
> looks like it's identical to
> PUT /servers/{server_id}/metadata
>
> why we need this API or it should be something like
>
> PUT /servers/{server_id}/metadata/{key} but we only accept a value to modify
> the meta given by {key} in the API side?
>
> Best Regards!
>
> Kevin (Chen) Ji 纪 晨
>
> Engineer, zVM Development, CSTL
> Notes: Chen CH Ji/China/IBM@IBMCN Internet: jiche...@cn.ibm.com
> Phone: +86-10-82451493
> Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
> Beijing 100193, PRC
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-gmann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] About live-resize spec

2017-09-20 Thread Matt Riedemann

On 9/20/2017 12:16 AM, Chen CH Ji wrote:
spec [1] has been there since 2014 and some patches were proposed but 
abandoned after that; can someone
please provide some info/background about why it was postponed, or whether 
it is due to some limitations that nova hasn't implemented yet?

Some operators suggested that this is valuable functionality, so it would 
be better to have it in the near future... thanks



[1]:https://blueprints.launchpad.net/nova/+spec/instance-live-resize

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN Internet: jiche...@cn.ibm.com
Phone: +86-10-82451493
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian 
District, Beijing 100193, PRC



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



We talked about this during the newton midcycle and from what I remember 
we wanted to make this depend on having the ability for users to know 
what they are capable of doing with their server instance in any given 
cloud. This has grown into the cross project capabilities API 
discussions that happen with the API work group.


At this point, I don't think we have anyone working on doing anything 
for a capabilities API in nova, nor do we have cross project agreement 
on a perfect solution that will work for all projects. At the PTG in 
Denver I think we just said we care less about having a perfect 
guideline for all projects to have a consistent API, and more about 
actually documenting the APIs that each project does have, which we do a 
pretty good job of in Nova.


So I think live resize would be fine to pick up again if you're just 
resizing CPU/RAM from the flavor and if we provide a policy rule to 
disable it in clouds that don't want to expose that feature.


Cloudbase was originally driving it for Hyper-v so you might want to 
talk with Claudiu Belu.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs][ptls][install] Install guide vs. tutorial

2017-09-20 Thread Jay S Bryant



On 9/19/2017 4:57 PM, Sean McGinnis wrote:

On 9/19/17, 2:43 PM, "Eric Fried"  wrote:

 Alex-
 
 	Regardless of what the dictionary might say, people associate the word

 "Tutorial" with a set of step-by-step instructions to do a thing.
 "Guide" would be a more general term.
 
 	I think of a "Tutorial" as being a *single* representative path through

 a process.  A "Guide" could supply different alternatives.
 
 	I expect a "Tutorial" to get me from start to finish.  A "Guide" might

 help me along the way, but could be sparser.
 
 	In summary, I believe the word "Tutorial" implies a very specific

 thing, so we should use it if and only if the doc is exactly that.

I don't think we'll get consensus on this, as my association with those words
do not match Eric's. :)

For me, a tutorial is something that teaches. So after I've gone through a
tutorial I would expect to have learned how installs work and would just know
these things (with an occasional need to reference a few points) going forward.

A guide to me is something that I know I will use whenever I need to do
something. So for me, having an installation guide is what I would expect
from this as every time I need to do a package based install, I am going to pull
up the guide to go through it.

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Interesting.

So Sean has the opposite impression from Eric and me.  Yeah, that does 
make it seem like reaching a consensus will be difficult.


At that point I think consistency becomes the most important thing.

Jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [notification] not transforming HostAPI related versioned notifications

2017-09-20 Thread Matt Riedemann

On 9/20/2017 4:06 AM, Balazs Gibizer wrote:
Do you feel we need some more user-facing documentation about 
these decisions?


No, this is fine. Thanks for explaining.

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs][ptls][install] Install guide vs. tutorial

2017-09-20 Thread Doug Hellmann
Excerpts from Sean McGinnis's message of 2017-09-19 16:57:40 -0500:
> > > On 9/19/17, 2:43 PM, "Eric Fried"  wrote:
> > >
> > > Alex-
> > > 
> > > Regardless of what the dictionary might say, people associate the 
> > > word
> > > "Tutorial" with a set of step-by-step instructions to do a thing.
> > > "Guide" would be a more general term.
> > > 
> > > I think of a "Tutorial" as being a *single* representative path 
> > > through
> > > a process.  A "Guide" could supply different alternatives.
> > > 
> > > I expect a "Tutorial" to get me from start to finish.  A "Guide" 
> > > might
> > > help me along the way, but could be sparser.
> > > 
> > > In summary, I believe the word "Tutorial" implies a very specific
> > > thing, so we should use it if and only if the doc is exactly that.
> 
> I don't think we'll get consensus on this, as my association with those words
> do not match Eric's. :)
> 
> For me, a tutorial is something that teaches. So after I've gone through a
> tutorial I would expect to have learned how installs work and would just know
> these things (with an occasional need to reference a few points) going 
> forward.
> 
> A guide to me is something that I know I will use whenever I need to do
> something. So for me, having an installation guide is what I would expect
> from this as every time I need to do a package based install, I am going to 
> pull
> up the guide to go through it.
> 
> Sean
> 

One of the other distinctions I remember being made when this came
up last week was that the documentation we have about installation
only includes information about one of many possible ways to install
the components that it covers.

What do other projects call the document that explains similar
information? Is there some sort of general consensus in the rest of the
open source community about what term to use?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][stable] EOL tags and upper-constraints.txt in tox.ini

2017-09-20 Thread Doug Hellmann
Excerpts from Jeremy Stanley's message of 2017-09-20 12:30:59 +:
> On 2017-09-20 07:11:51 -0500 (-0500), Matthew Thode wrote:
> > On 17-09-20 13:35:44, Michal Pryc wrote:
> > > EOL releases for the nova component (checked neutron as well,
> > > possibly many other components) have wrong pointers to the
> > > upper-constraints.txt files as they are referencing
> > > stable/branch rather than branch-eol
> [...]
> > I think this is an error in our process that should be fixed
> > (newton is going EOL soon).
> [...]
> 
> As you may recall, at the PTG we also discussed a solution to this
> problem (under the auspices of solving the reverse scenario during
> the RC period for upcoming releases): specifically, publishing
> branch series constraints files to the releases site. EOL'd projects
> can refer to those at a static URL indefinitely rather than being at
> the mercy of branch/tag changes in the Git repository.

Is there any reason not to use the published files for all regular
builds, instead of having tox.ini point to a git URL? Maybe only for
regular builds off of stable branches?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][stable] EOL tags and upper-constraints.txt in tox.ini

2017-09-20 Thread Jeremy Stanley
On 2017-09-20 07:11:51 -0500 (-0500), Matthew Thode wrote:
> On 17-09-20 13:35:44, Michal Pryc wrote:
> > EOL releases for the nova component (checked neutron as well,
> > possibly many other components) have wrong pointers to the
> > upper-constraints.txt files as they are referencing
> > stable/branch rather than branch-eol
[...]
> I think this is an error in our process that should be fixed
> (newton is going EOL soon).
[...]

As you may recall, at the PTG we also discussed a solution to this
problem (under the auspices of solving the reverse scenario during
the RC period for upcoming releases): specifically, publishing
branch series constraints files to the releases site. EOL'd projects
can refer to those at a static URL indefinitely rather than being at
the mercy of branch/tag changes in the Git repository.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][nova][mogan] How to show respect to the original authors?

2017-09-20 Thread Jeremy Stanley
On 2017-09-20 07:51:29 -0400 (-0400), Davanum Srinivas wrote:
[...]
> please indicate which file from Nova, so if anyone wanted to cross
> check for fixes etc can go look in Nova
[...]

While the opportunity has probably passed in this case, the ideal
method is to start with a Git fork of the original as your seed
project (perhaps with history pruned to just the files you're
reusing via git filter-branch or similar). This way the complete
change history of the files in question is preserved for future
inspection.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][stable] EOL tags and upper-constraints.txt in tox.ini

2017-09-20 Thread Matthew Thode
On 17-09-20 13:35:44, Michal Pryc wrote:
> Hi,
> 
> EOL releases for the nova component (checked neutron as well, possibly many
> other components) have wrong pointers to the upper-constraints.txt files as
> they are referencing stable/branch rather than branch-eol, see example:
> 
> https://github.com/openstack/nova/blob/liberty-eol/tox.ini#L12
> 
> Line:
> https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/liberty
> 
> Should be:
> https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=liberty-eol
> 
> EOL means there should be no more changes to the release, but in this case
> the process of EOL'ing caused a regression, as it is now impossible to run
> tests against those tags (this applies to liberty/mitaka and in the future
> newer releases).
> 
> Should this be fixed somehow, or is EOL really not meant to be touched and
> fixed, and those will be left broken?
> 
> -- 
> best
> Michal Pryc

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I think this is an error in our process that should be fixed (newton is
going EOL soon).  While it'd be nice to fix, I don't think it's worth the
effort for an EOL'd and now unbranched project.  We could create a
temporary branch from the commit and retag, but as a rule we should not
use the same tag twice (removing the pointer that existed and reusing it
to point to a different commit).

-- 
Matthew Thode (prometheanfire)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [charms] PTG summary

2017-09-20 Thread James Page
Hi All

Here’s a summary of the charm related discussion from PTG last week.

# Cross Project Discussions

## Skip Level Upgrades

This topic was discussed at the start of the week, in the context of
supporting upgrades across multiple OpenStack releases for operators.  What
was immediately evident was this was really a discussion around
‘fast-forward’ upgrades, rather than actually skipping any specific
OpenStack series as part of a cloud upgrade.  Deployments would still need
to step through each OpenStack release series in turn, so the discussion
centred around how to make this much easier for operators and deployment
tools to consume than it has been to-date.

There was general agreement on the principle that all steps required to
update a service between series should be supported whilst the service is
offline – i.e. all database migrations can be completed without the
services actually running.  This would allow multiple upgrade steps to be
completed without having to start services up on interim steps. Note that a
lot of projects already support this approach, but it's never been agreed
as a general policy as part of the ‘supports-upgrade‘ tag, which was one of
the actions resulting from this discussion.

In the context of the OpenStack Charms, we already follow something along
these lines for minimising the amount of service disruption in the control
plane during OpenStack upgrades; with implementation of this approach
across all projects, we can avoid having to start up services on each
series step as we do today, further optimising the upgrade process
delivered by the charms for services that don’t support rolling upgrades.
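The fast-forward pattern described above can be sketched as a small simulation: services stay down while each series' offline DB migrations run in turn, and only come back up on the final series. This is purely illustrative – the helper and step names are invented, not a real OpenStack or charm API.

```python
def fast_forward_upgrade(current, target, all_series):
    """Return the ordered steps to move `current` -> `target`.

    Services are stopped once, each intermediate series' database
    migrations run offline, and services start only at the end.
    """
    start = all_series.index(current) + 1
    end = all_series.index(target) + 1
    steps = ["stop services"]
    for series in all_series[start:end]:
        steps.append(f"install {series} code")
        steps.append(f"run {series} db migrations (offline)")
    steps.append("start services")
    return steps


if __name__ == "__main__":
    for step in fast_forward_upgrade("mitaka", "pike",
                                     ["mitaka", "newton", "ocata", "pike"]):
        print(step)
```

Note that services are started exactly once, which is the optimisation the discussion was about: no service restarts on the interim newton/ocata steps.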

## Policy in Code

Most services in OpenStack rely on a policy.{json,yaml} file to define the
policy for role based access into API endpoints – for example, what
operations require admin level permissions for the cloud. Moving all policy
default definitions to code rather than in a configuration file is a goal
for the Queens development cycle.

This approach will make adapting policies as part of an OpenStack Charm
based deployment much easier, as we only have to manage the delta on top of
the defaults, rather than having to manage the entire policy file for each
OpenStack release.  Notably Nova and Keystone have already moved to this
approach during previous development cycles.
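The "manage only the delta" idea can be illustrated with a toy merge. The policy rule names below mirror Nova-style rules for flavour, but the code is not oslo.policy's actual API – just a sketch of overlaying an operator delta on in-code defaults.

```python
# In-code defaults (illustrative rule names, not a real policy set).
DEFAULTS = {
    "os_compute_api:servers:index": "rule:admin_or_owner",
    "os_compute_api:os-hypervisors": "rule:admin_api",
}


def effective_policy(defaults, operator_delta):
    """Overlay the operator-managed delta on top of the in-code defaults."""
    merged = dict(defaults)
    merged.update(operator_delta)
    return merged


# The operator only maintains the one rule they want to change.
delta = {"os_compute_api:os-hypervisors": "rule:admin_or_owner"}
policy = effective_policy(DEFAULTS, delta)
```

On an OpenStack series upgrade, only `DEFAULTS` changes (shipped with the code); the small operator delta carries forward unchanged.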

## Deployment (SIG)

During the first two days, some cross-deployment-tool discussions were
held for a variety of topics; of specific interest for the OpenStack Charms
was the discussion around health/status middleware for projects so that the
general health of a service can be assessed via its API – this would cover
in-depth checks such as access to database and messaging resources, as well
as access to other services that the checked service might depend on – for
example, can Nova access Keystone’s API for authentication of tokens etc.
There was general agreement that this was a good idea, and it will be
proposed as a community goal for the OpenStack project.
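A rough sketch of what such a health endpoint could look like as plain WSGI, with stand-in dependency checks (illustrative only, not a proposed implementation – real checks would probe the database, message bus, and dependent service APIs):

```python
def make_healthcheck_app(checks):
    """Build a WSGI app from `checks`: name -> zero-arg callable -> bool."""
    def app(environ, start_response):
        results = {name: check() for name, check in checks.items()}
        healthy = all(results.values())
        status = "200 OK" if healthy else "503 Service Unavailable"
        body = "; ".join(f"{name}={'ok' if ok else 'fail'}"
                         for name, ok in sorted(results.items()))
        payload = body.encode("utf-8")
        start_response(status, [("Content-Type", "text/plain"),
                                ("Content-Length", str(len(payload)))])
        return [payload]
    return app


# Example: pretend the database is reachable but the message bus is not.
app = make_healthcheck_app({"database": lambda: True,
                            "messaging": lambda: False})
```

Any failing dependency flips the whole endpoint to 503, which is what a load balancer or monitoring system would key off.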

# OpenStack Charms Devroom

## Keystone: v3 API as default

The OpenStack Charms have optionally supported the Keystone v3 API for some
time, and the Keystone v2 API is officially deprecated, so we discussed the
approach for switching the default API deployed by the charms going
forwards; in summary:

* New deployments should default to the v3 API and associated policy
definitions.
* Existing deployments that get upgraded to newer charm releases should not
switch automatically to v3, limiting the impact on services built around
v2-based deployments already in production.
* The charms already support switching from v2 to v3, so v2 deployments can
upgrade as and when they are ready to do so.
* At some point in time, we'll have to automatically switch v2 deployments
to v3 on OpenStack series upgrade, but that does not have to happen yet.

## Keystone: Fernet Token support

The charms currently only support UUID-based tokens (since PKI was dropped
from Keystone). The preferred format is now Fernet, so we should implement
this in the charms – we should be able to leverage the existing PKI key
management code to an extent to support Fernet tokens.

## Stable Branch Life-cycles

Currently the OpenStack Charms team actively maintains two branches – the
current development focus in the master branch, and the most recent stable
branch – which right now is stable/17.08.  At the point of the next
release, the stable/17.08 branch is no longer maintained, being superseded
by the new stable/XX.XX branch.  This is reflected in the promulgated
charms in the Juju charm store as well.  Older versions of charms remain
consumable (albeit there appears to be some trimming of older revisions
which needs investigating). If a bug is discovered in a charm version from
an inactive stable branch, the only course of action is to upgrade to the
latest stable version for fixes, which may also include new features and
behavioural changes.

There are some technical challenges with regard 

Re: [openstack-dev] [tc][nova][mogan] How to show respect to the original authors?

2017-09-20 Thread Davanum Srinivas
Zhenguo,

Thanks for bringing this up.

For #1, yes, please indicate which file it came from in Nova, so anyone
who wants to cross-check for fixes etc. can go look in Nova.
For #2, when you pick up a commit from Nova, please make sure the
commit message in Mogan has the following:
   * The Gerrit change id(s) of the original commit, so folks can
easily find the original commit in Gerrit
   * "Co-Authored-By:" tags for each author in the original commit,
so they get credit
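For example, a Mogan commit that ports a Nova fix might carry a message along these lines (the change id and author name here are invented purely for illustration):

```
Fix foo handling in the ironic driver

This is a port of the equivalent Nova fix.
Original Nova change: I0123456789abcdef0123456789abcdef01234567

Co-Authored-By: Jane Developer <jane@example.com>
```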

Also, please make sure you do not alter any copyright or license
related information in the header when you first copy a file from
another project.

Thanks,
Dims

On Wed, Sep 20, 2017 at 4:20 AM, Zhenguo Niu  wrote:
> Hi all,
>
> I'm from the Mogan team; we copied some code/frameworks from Nova since we want
> to be a Nova with a bare metal specific API.
> About why we are reinventing the wheel, you can find more information here [1].
>
> I would like to know what's the decent way to show our respect to the
> original authors we copied from.
>
> After discussing with the team, we plan to do some improvements as below:
>
> 1. Adds some comments to the beginning of such files to indicate that they
> leveraged the implementation of Nova.
>
> https://github.com/openstack/mogan/blob/master/mogan/baremetal/ironic/driver.py#L19
> https://github.com/openstack/mogan/blob/master/mogan/console/websocketproxy.py#L17-L18
> https://github.com/openstack/mogan/blob/master/mogan/consoleauth/manager.py#L17
> https://github.com/openstack/mogan/blob/master/mogan/engine/configdrive.py#L17
> https://github.com/openstack/mogan/blob/master/mogan/engine/metadata.py#L18
> https://github.com/openstack/mogan/blob/master/mogan/network/api.py#L18
> https://github.com/openstack/mogan/blob/master/mogan/objects/aggregate.py#L17
> https://github.com/openstack/mogan/blob/master/mogan/objects/keypair.py#L17
> https://github.com/openstack/mogan/blob/master/mogan/objects/server_fault.py#L17
> https://github.com/openstack/mogan/blob/master/mogan/objects/server_group.py#L17
> https://github.com/openstack/mogan/blob/master/mogan/scheduler/client/report.py#L17
> https://github.com/openstack/mogan/blob/master/mogan/scheduler/filter_scheduler.py#L17
>
> 2. For the changes we follows what nova changed, should reference to the
> original authors in the commit messages.
>
>
> Please let me know if there are something else we need to do or there are
> already some existing principles we can follow, thanks!
>
>
>
> [1] https://wiki.openstack.org/wiki/Mogan
>
>
> --
> Best Regards,
> Zhenguo Niu
>



-- 
Davanum Srinivas :: https://twitter.com/dims



[openstack-dev] EOL tags and upper-constraints.txt in tox.ini

2017-09-20 Thread Michal Pryc
Hi,

EOL releases for the nova component (I checked neutron as well; possibly many
other components) have wrong pointers to the upper-constraints.txt files, as
they reference stable/branch rather than branch-eol. See example:

https://github.com/openstack/nova/blob/liberty-eol/tox.ini#L12

Line:
https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/liberty

Should be:
https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=liberty-eol

EOL means there should be no more changes to the release, but in this case
the process of EOL'ing introduced a regression: it is now impossible to run
tests against those tags (this applies to liberty/mitaka and, in the
future, newer releases).

Should this be fixed somehow, or are EOL tags not really meant to be
touched, leaving these broken?
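If the tags themselves cannot be retagged, a local workaround is a one-line rewrite of the constraints URL before running tox. A sketch of that rewrite (illustrative Python, not an official tool; the URLs are the ones quoted above):

```python
BASE = ("https://git.openstack.org/cgit/openstack/requirements/"
        "plain/upper-constraints.txt")


def eol_constraints_url(tox_line, series):
    """Point the upper-constraints query at the <series>-eol tag
    instead of the removed stable/<series> branch."""
    return tox_line.replace(f"?h=stable/{series}", f"?h={series}-eol")


line = f"{BASE}?h=stable/liberty"
fixed = eol_constraints_url(line, "liberty")
```

The same substitution could be applied with `sed` against the checked-out tox.ini before invoking tox.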

-- 
best
Michal Pryc


Re: [openstack-dev] [all] review.openstack.org downtime and Gerrit upgrade TODAY 15:00 UTC - 23:59 UTC

2017-09-20 Thread Andrea Frittoli
On Wed, Sep 20, 2017 at 12:00 PM Paul Bourke  wrote:

> I had the following bookmark:
>
> status:open is:mergeable (project:openstack/kolla OR
> project:openstack/kolla-ansible) NOT label:Code-Review>=-2,self NOT
> owner:pauldbourke NOT age:1month branch:master
>
> Which basically means, find changes that are:
>
> * open
> * in kolla or kolla-ansible
> * not reviewed by me
> * not owned by me
> * not older than a month
> * on the master branch
>
> This seems to no longer work unless I remove the
> 'label:Code-Review>=-2,self'.
>
> Anyone else having similar issues?
>

See Sean's message and patch
http://lists.openstack.org/pipermail/openstack-dev/2017-September/122281.html

Andrea Frittoli (andreaf)


>
> On 19/09/17 00:58, Clark Boylan wrote:
> > On Mon, Sep 18, 2017, at 06:43 AM, Andreas Jaeger wrote:
> >> Just a friendly reminder that the upgrade will happen TODAY, Monday
> >> 18th, starting at 15:00 UTC. The infra team expects that it takes 8
> >> hours, so until 2359 UTC.
> >
> > This work was functionally completed at 23:43 UTC. We are now running
> > Gerrit 2.13.9. There are some cleanup steps that need to be performed in
> > Infra land, mostly to get puppet running properly again.
> >
> > You will also notice that newer Gerrit behaves in some new and exciting
> > ways. Most of these should be improvements like not needing to reapprove
> > changes that already have a +1 Workflow but also have a +1 Verified;
> > recheck should now work for these cases. If you find a new behavior that
> > looks like a bug please let us know, but we should also work to file
> > them upstream so that newer Gerrit can address them.
> >
> > Feel free to ask us questions if anything else comes up.
> >
> > Thank you to everyone that helped with the upgrade. Seems like these get
> > more and more difficult with each Gerrit release so all the help is
> > greatly appreciated.
> >
> > Clark
> >
> >
>


Re: [openstack-dev] [all] review.openstack.org downtime and Gerrit upgrade TODAY 15:00 UTC - 23:59 UTC

2017-09-20 Thread Paul Bourke

I had the following bookmark:

status:open is:mergeable (project:openstack/kolla OR 
project:openstack/kolla-ansible) NOT label:Code-Review>=-2,self NOT 
owner:pauldbourke NOT age:1month branch:master


Which basically means, find changes that are:

* open
* in kolla or kolla-ansible
* not reviewed by me
* not owned by me
* not older than a month
* on the master branch

This seems to no longer work unless I remove the 
'label:Code-Review>=-2,self'.


Anyone else having similar issues?

On 19/09/17 00:58, Clark Boylan wrote:

On Mon, Sep 18, 2017, at 06:43 AM, Andreas Jaeger wrote:

Just a friendly reminder that the upgrade will happen TODAY, Monday
18th, starting at 15:00 UTC. The infra team expects that it takes 8
hours, so until 2359 UTC.


This work was functionally completed at 23:43 UTC. We are now running
Gerrit 2.13.9. There are some cleanup steps that need to be performed in
Infra land, mostly to get puppet running properly again.

You will also notice that newer Gerrit behaves in some new and exciting
ways. Most of these should be improvements like not needing to reapprove
changes that already have a +1 Workflow but also have a +1 Verified;
recheck should now work for these cases. If you find a new behavior that
looks like a bug please let us know, but we should also work to file
them upstream so that newer Gerrit can address them.

Feel free to ask us questions if anything else comes up.

Thank you to everyone that helped with the upgrade. Seems like these get
more and more difficult with each Gerrit release so all the help is
greatly appreciated.

Clark






Re: [openstack-dev] [all][QA][group-based-policy][zaqar][packaging_deb][fuel][networking-*] Marking <= mitaka EOL

2017-09-20 Thread Andreas Jaeger
On 2017-08-24 05:14, Tony Breeds wrote:
> Hello all,
> We have a number of old stable/* branches hanging around and I'd
> like to mark anything <= stable/mitaka as EOL.  I've highlighted a few
> projects on the subject line:
> 
> QA: Are the older branches of grenade safe to go?  IIUC we don't use
> them as we don't do grenade testing on $oldest stable branch
> group-based-policy: In the past you've requested your old branches stay
> around Do you still need this?  Is there value in
> the *all* staying active?
> zaqar: I see that liberty was EOL'd and then reactivated; do you still need
>liberty2?
> packaging_deb: As these repos have the $project origin, using the
>standard series-eol tag doesn't make sense; for example
>deb-nova gets a mitaka-eol from the nova repo.   So I've
>picked mitaka-eol-dpkg.
> fuel, networking-*: There are several entries for these projects groups
> so I'm calling them out here for attention.
> 
> I'm proposing we do this removal during the PTG.  Once we've done the
> series based branches we can look at old versioned releases like
> stable/16.04 etc.

So, for fuel we have stable/7.0 etc - what are the plans for these? Can
we retire them as well?

Those are even older AFAIK,

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: [openstack-dev] [all] Marking <= mitaka EOL

2017-09-20 Thread Joshua Hesketh
Hi All,

I've processed the list that Tony sent through this morning, removing the
branches and tagging their positions as described.

The only exception being that openstack/zaqar doesn't have stable/liberty
or stable/liberty2 branches to EOL.

Let me know if I've missed anything.

Cheers,
Josh

On Wed, Sep 20, 2017 at 11:03 AM, Tony Breeds 
wrote:

> On Thu, Aug 24, 2017 at 01:14:56PM +1000, Tony Breeds wrote:
> > Hello all,
> > We have a number of old stable/* branches hanging around and I'd
> > like to mark anything <= stable/mitaka as EOL.  I've highlighted a few
> > projects on the subject line:
> >
> > QA: Are the older branches of grenade safe to go?  IIUC we don't use
> > them as we don't do grenade testing on $oldest stable branch
> > group-based-policy: In the past you've requested your old branches stay
> > around Do you still need this?  Is there value in
> > the *all* staying active?
> > zaqar: I see that liberty was EOL'd and then reactivated; do you still need
> >liberty2?
> > packaging_deb: As these repos have the $project origin, using the
> >standard series-eol tag doesn't make sense; for example
> >deb-nova gets a mitaka-eol from the nova repo.   So I've
> >picked mitaka-eol-dpkg.
> > fuel, networking-*: There are several entries for these projects groups
> > so I'm calling them out here for attention.
> >
> > I'm proposing we do this removal during the PTG.  Once we've done the
> > series based branches we can look at old versioned releases like
> > stable/16.04 etc.
> >
> > It's hard to present the data in a clear way so given infra will be the
> > ultimate actioners of this list I present this as a shell script:
>
> Per previous discussion here's the list to EOL everything <= mitaka
> except for:
> openstack/group-based-policy,
> openstack/group-based-policy-automation,
> openstack/group-based-policy-ui,
> openstack/python-group-based-policy-client
> and openstack/networking-bigswitch
>
> ---
> eol-branch.sh -- stable/essex essex-eol openstack/anvil
> eol-branch.sh -- stable/folsom folsom-eol openstack/anvil
> eol-branch.sh -- stable/grizzly grizzly-eol openstack/anvil
> eol-branch.sh -- stable/havana havana-eol openstack/openstack
> eol-branch.sh -- stable/icehouse icehouse-eol \
>  openstack/astara openstack/networking-brocade \
>  openstack/networking-cisco openstack/networking-mlnx \
>  openstack/networking-odl openstack/networking-plumgrid \
>  openstack/nova-solver-scheduler \
>  openstack/sahara-image-elements openstack/tricircle \
>  openstack/trio2o openstack/vmware-nsx
> eol-branch.sh -- stable/icehouse icehouse-eol-dpkg \
>  openstack/deb-networking-cisco \
>  openstack/deb-networking-mlnx openstack/deb-networking-odl
> eol-branch.sh -- stable/juno juno-eol \
>  openstack/astara openstack/astara-appliance \
>  openstack/astara-horizon openstack/astara-neutron \
>  openstack/group-based-policy \
>  openstack/group-based-policy-automation \
>  openstack/group-based-policy-ui openstack/mistral \
>  openstack/mistral-dashboard openstack/mistral-extra \
>  openstack/networking-bigswitch \
>  openstack/networking-brocade openstack/networking-cisco \
>  openstack/networking-mlnx openstack/networking-odl \
>  openstack/networking-plumgrid \
>  openstack/nova-solver-scheduler \
>  openstack/openstack-resource-agents \
>  openstack/powervc-driver openstack/proliantutils \
>  openstack/puppet-n1k-vsm openstack/puppet-vswitch \
>  openstack/python-group-based-policy-client \
>  openstack/python-mistralclient \
>  openstack/python-muranoclient openstack/vmware-nsx
> eol-branch.sh -- stable/juno juno-eol-dpkg \
>  openstack/deb-mistral openstack/deb-networking-cisco \
>  openstack/deb-networking-mlnx
> openstack/deb-networking-odl \
>  openstack/deb-python-mistralclient \
>  openstack/deb-python-muranoclient \
>  openstack/deb-python-proliantutils
> eol-branch.sh -- stable/kilo kilo-eol \
>  openstack-dev/grenade openstack/murano-apps \
>  openstack/networking-h3c openstack/requirements
> eol-branch.sh -- stable/kilo kilo-eol1 \
>  openstack/group-based-policy \
>  openstack/group-based-policy-automation \
>  openstack/group-based-policy-ui \
>  openstack/python-group-based-policy-client
> eol-branch.sh -- stable/kilo_v2 kilo_v2-eol openstack/networking-bigswitch
> eol-branch.sh 

[openstack-dev] [acceleration]Cyborg Team Meeting 2017.09.20

2017-09-20 Thread Zhipeng Huang
Hi Team,

We will resume our weekly meeting today, although it might be a pretty
laid-back one considering we just finished the PTG last week. The agenda
can be found here [0] as usual.

[0] https://wiki.openstack.org/wiki/Meetings/CyborgTeamMeeting

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado


Re: [openstack-dev] [glance] Queens PTG: Thursday summary

2017-09-20 Thread Belmiro Moreira
Hi Brian,

as we discussed in the past, the image lifecycle has been a problem for us
for a long time.

However, I have some concerns about adding/maintaining a new project only
to help with image discovery.

At CERN we have a small set of images that we maintain and offer as
"public" images to our users. Over the years this list has been growing
because of new image releases. We keep the old image releases with
visibility "public" because of old bugs in nova (since fixed) when
live-migrating/resizing instances, and because we have some use cases
where the user needs a very old release.

Discovering the latest image release is hard, so we added an image property
"recommended" that we update when a new image release is available. We also
patched horizon to show the "recommended" images first.

This helps our users identify the latest image release, but we continue to
show for each project the full list of public images + all personal user
images. Some projects have an image list of hundreds of images.

Having a "hidden" property as you are proposing would be great!

For now, we are planning to solve this problem by using/abusing the
visibility "community". Changing the visibility of old image releases to
"community" will hide them from the default "image-list", but they will
remain discoverable and available.

Belmiro
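As an aside, the "most recent revision of a series" lookup discussed in this thread can be sketched in a few lines, given image records carrying the (os_distro, os_version, local_version) properties from Brian's example below. This is purely illustrative – not a Glance or Searchlight API:

```python
def latest_per_series(images):
    """Map each (os_distro, os_version) series to its newest revision."""
    best = {}
    for img in images:
        key = (img["os_distro"], img["os_version"])
        if key not in best or img["local_version"] > best[key]["local_version"]:
            best[key] = img
    return best


# The three MyOS 3.2 revisions from the example metadata in the thread.
images = [
    {"id": "a", "os_distro": "MyOS", "os_version": "3.2", "local_version": 1},
    {"id": "b", "os_distro": "MyOS", "os_version": "3.2", "local_version": 2},
    {"id": "c", "os_distro": "MyOS", "os_version": "3.2", "local_version": 4},
]
latest = latest_per_series(images)
```

A 'hidden' property would let operators keep the non-latest revisions out of the default listing while still allowing exactly this kind of query to find them.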

On Tue, Sep 19, 2017 at 8:24 PM, Brian Rosmaita 
wrote:

> On Mon, Sep 18, 2017 at 7:47 AM, Belmiro Moreira
>  wrote:
> > Hi Brian,
> > Thanks for the sessions summaries.
> >
> > We are really interested in the image lifecycle support.
> > Can you elaborate how searchlight would help solving this problem?
>
> The role we see for searchlight is more on the image discovery end of
> the problem. The context is that we were trying to think of a small
> set of image metadata that could uniquely identify a series of images
> (os_distro, os_version, local_version) so that it would be easy for end
> users to discover the most recent revision with all the security
> updates, etc.  For example, you might have:
>
> initial release of public image: os_distro=MyOS, os_version=3.2,
> local_version=1
> security update to package P1: os_distro=MyOS, os_version=3.2,
> local_version=2
> security update to package P2: os_distro=MyOS, os_version=3.2,
> local_version=4
>
> The image_id would be different on each of these, and the operator
> would prefer that users boot from the most recent.  Suppose an
> operator also offers a pre-built database image built on each of
> these, and a pre-built LAMP stack built on each of these, etc.  Each
> would have the same os_distro and os_version value, so we'd need
> another field to distinguish them, maybe os_content (values: bare, db,
> lamp).  But then with the database image, for a particular (os_distro,
> os_version, os_content) tuple, there might be several different images
> built for the popular versions of that DB, so we'd need another field
> for that as well.  So ultimately it looks like you'd need to make a
> complicated query across several image properties, and searchlight
> would easily allow you to do that.
>
> This still leaves us with the problem of making it simple to locate
> the most recent version of each series of images, and that would be
> where something like a 'hidden' property would come in.  It's been
> proposed before, but was rejected, I think because it didn't cover
> enough use cases.  But that was pre-searchlight, so introducing a
> 'hidden' field may be a good move now.  It would be interesting to
> hear what you think about that.
>
>
> >
> > thanks,
> > Belmiro
> > CERN
> >
> > On Fri, Sep 15, 2017 at 4:46 PM, Brian Rosmaita <
> rosmaita.foss...@gmail.com>
> > wrote:
> >>
> >> For those who couldn't attend, here's a quick synopsis of what was
> >> discussed yesterday.
> >>
> >> Please consult the etherpad for each session for details.  Feel free
> >> to put questions/comments on the etherpads, and then put an item on
> >> the agenda for the weekly meeting on Thursday 21 September, and we'll
> >> continue the discussion.
> >>
> >>
> >> Complexity removal
> >> --
> >> https://etherpad.openstack.org/p/glance-queens-ptg-complexity-removal
> >>
> >> In terms of a complexity contribution barrier, everyone agreed that
> >> the domain model is the largest factor.
> >>
> >> We also agreed that simplifying it is not something that could happen
> >> in the Queens cycle.  It's probably a two-cycle effort, one cycle to
> >> ensure sufficient test coverage, and one cycle to refactor.  Given the
> >> strategic planning session yesterday, we probably wouldn't want to
> >> tackle this until after the registry is completely removed, which is
> >> projected to happen in S.
> >>
> >>
> >> Image lifecycle support
> >> ---
> >> https://etherpad.openstack.org/p/glance-queens-ptg-lifecycle
> >>
> >> We sketched out several approaches, but trying 

Re: [openstack-dev] [tripleo] Install Kubernetes in the overcloud using TripleO

2017-09-20 Thread Jiří Stránský

On 20.9.2017 10:15, Bogdan Dobrelya wrote:

On 08.06.2017 18:36, Flavio Percoco wrote:

Hey y'all,

Just wanted to give an updated on the work around tripleo+kubernetes.
This is
still far in the future but as we move tripleo to containers using
docker-cmd,
we're also working on the final goal, which is to have it run these
containers
on kubernetes.

One of the first steps is to have TripleO install Kubernetes in the
overcloud
nodes and I've moved forward with this work:

https://review.openstack.org/#/c/471759/

The patch depends on the `ceph-ansible` work and it uses the
mistral-ansible
action to deploy kubernetes by leveraging kargo. As it is, the patch
doesn't
quite work as it requires some files to be in some places (ssh keys) and a
couple of other things. None of these "things" are blockers as in they
can be
solved by just sending some patches here and there.

I thought I'd send this out as an update and to request some early
feedback on
the direction of this patch. The patch, of course, works in my local
environment
;)


Note that Kubespray (former Kargo) now supports the kubeadm tool
natively [0]. This speeds up cluster bootstrapping from an average of
25-30 min to 9 or so.
for upstream development of OpenStack overclouds managed by K8s.
Especially bearing in mind the #deployment-time effort and all the hard
work done by the tripleo and infra teams to shorten the CI job times.


I tried deploying with kubeadm_enable yesterday and no luck yet on 
CentOS, but i do want to get back to this as the speed up sounds 
promising :)


AIO kubernetes deployment the non-kubeadm way seemed to work fine 
(Flavio's patch above with a workaround for [2] and a small Kubespray 
fix [3]).


Jirka



By the way, here is a package review [1] for adding a kubespray-ansible
library, just ansible roles and playbooks, to RDO. I'd appreciate some
help with moving this forward, like choosing another place to host the
package, it got stuck a little bit.

[0] https://github.com/kubernetes-incubator/kubespray/issues/553
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1482524

[2] https://bugs.launchpad.net/mistral/+bug/1718384
[3] https://github.com/kubernetes-incubator/kubespray/pull/1677



[openstack-dev] [mogan] Weekly meeting canceled this week

2017-09-20 Thread Zhenguo Niu
Hello team,

I'll cancel tomorrow's IRC meeting due to the virtual PTG this Friday.

See you all on next meeting!

-- 
Best Regards,
Zhenguo Niu


Re: [openstack-dev] [neutron][oslo.messaging][femdc]Topic names for every resource type RPC endpoint

2017-09-20 Thread Miguel Angel Ajo Pelayo
I wrote those lines.

At that time, I tried a couple of tests with a publisher and a receiver at
that scale. It was the receiver side that crashed trying to subscribe; the
sender was completely fine.

Sadly I didn't keep the test examples; I should have stored them in github
or something. It shouldn't be hard to replicate, though, if you follow the
oslo_messaging docs.
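For context on why per-"resource type + id" topics explode the receiver count, here is a back-of-the-envelope sketch: with per-type topics an agent needs one receiver per resource type, but with per-id topics it needs one per individual resource. The resource counts below are made up purely for illustration.

```python
def receivers_needed(resource_counts, per_id=False):
    """Estimate oslo_messaging receivers an agent would need.

    resource_counts: mapping of resource type -> number of resources
    the agent cares about.
    """
    if per_id:
        # One receiver per individual resource (type + id topics).
        return sum(resource_counts.values())
    # One receiver per resource type.
    return len(resource_counts)


counts = {"SecurityGroup": 500, "QosPolicy": 50, "Trunk": 20}
per_type = receivers_needed(counts)                    # 3 receivers
per_resource = receivers_needed(counts, per_id=True)   # 570 receivers
```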



On Wed, Sep 20, 2017 at 9:58 AM, Matthieu Simonin  wrote:

> Hello,
>
> In the Neutron docs about RPCs and Callbacks system, it is said[1] :
>
> "With the underlying oslo_messaging support for dynamic topics on the
> receiver
> we cannot implement a per “resource type + resource id” topic, rabbitmq
> seems
> to handle 1’s of topics without suffering, but creating 100’s of
> oslo_messaging receivers on different topics seems to crash."
>
> I wonder if this statements still holds for the new transports supported in
> oslo.messaging (e.g Kafka, AMQP1.0) or if it's more a design limitation.
> I'm interested in any relevant docs/links/reviews on the "topic" :).
>
> Moreover, I'm curious to get an idea on how many different resources a
> Neutron
> Agent would have to manage and thus how many oslo_messaging receivers
> would be
> required (e.g how many security groups a neutron agent has to manage ?) -
> at
> least the order of magnitude.
>
> Best,
>
> Matt
>
>
>
> [1]: https://docs.openstack.org/neutron/latest/contributor/
> internals/rpc_callbacks.html#topic-names-for-every-
> resource-type-rpc-endpoint
>


Re: [openstack-dev] [nova] [notification] not transforming HostAPI related versioned notifications

2017-09-20 Thread Balazs Gibizer



On Wed, Sep 20, 2017 at 2:37 AM, Matt Riedemann wrote:

On 9/19/2017 10:35 AM, Balazs Gibizer wrote:
> Hi,
>
> Similar to my earlier mail about not transforming legacy notifications
> in the networking area [1], now I want to propose not to transform
> HostAPI related notifications.
> We have the following legacy notifications on our TODO list [2] to be
> transformed:
> * HostAPI.power_action.end
> * HostAPI.power_action.start
> * HostAPI.set_enabled.end
> * HostAPI.set_enabled.start
> * HostAPI.set_maintenance.end
> * HostAPI.set_maintenance.start
>
> However, the os-hosts API has been deprecated since microversion 2.43. The
> suggested replacement is the os-services API. The os-services API already
> emits service.update notification for every action on that API. So I
> suggest not to transform the above HostAPI notifications to the
> versioned notification format.
>
> Cheers,
> gibi
>
>
> [1] http://lists.openstack.org/pipermail/openstack-dev/2017-September/121968.html
>
> [2] https://vntburndown-gibi.rhcloud.com/index.html
>
>

This also seems reasonable to me. I had to dig up what set_enabled was
for again, but now I remember, it's basically the same thing as
enable/disable a service in the os-services API, but only implemented
for the xenapi driver.

So yeah, +1 to not converting these to versioned notifications.


Cool, thanks.




As a side question: how do you keep track of the things we purposefully
*aren't* going to implement for versioned notifications?


I remove them [2] from our TODO list [1] with a commit message explaining
the reason [3]. Do you feel we need more user-facing documentation about
these decisions?


Cheers,
gibi

[1] https://vntburndown-gibi.rhcloud.com/index.html
[2] https://github.com/gibizer/nova-versioned-notification-transformation-burndown/commits/master/to_be_transformed
[3] https://github.com/gibizer/nova-versioned-notification-transformation-burndown/commit/112a25aecf7e9b1f344840ae4ce150f70e75b634#diff-cd2b276ea9db6ffddf9aa78d871ab2e9
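
As context for readers of the archive: what distinguishes the versioned format being discussed is that every payload is wrapped in a common, versioned envelope, which is what makes it safely consumable by tooling (unlike the untyped legacy HostAPI payloads above). A minimal sketch of that envelope shape, with made-up payload name, version, and data for illustration:

```python
# Sketch of the envelope shape used by nova's versioned notifications.
# The nova_object.* keys follow nova's documented format; the payload
# name, version, and data below are illustrative, not real nova values.

def versioned_payload(name, version, data):
    """Wrap raw data in a versioned-notification envelope."""
    return {
        'nova_object.name': name,
        'nova_object.namespace': 'nova',
        'nova_object.version': version,
        'nova_object.data': data,
    }

payload = versioned_payload('ServiceStatusPayload', '1.0',
                            {'host': 'compute-1', 'disabled': True})
print(payload['nova_object.name'], payload['nova_object.version'])
```

Bumping `nova_object.version` on schema changes is what lets consumers detect payload evolution without guessing at dict keys, which the legacy notifications forced them to do.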






--

Thanks,

Matt




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][nova][mogan] How to show respect to the original authors?

2017-09-20 Thread Zhenguo Niu
Hi all,

I'm from the Mogan team. We copied some code/frameworks from Nova since we
want to be a Nova-like service with a bare metal specific API.
For background on why we are reinventing the wheel, you can find more information here [1].

I would like to know the proper way to show our respect to the
original authors whose code we copied.

After discussing with the team, we plan to make the following improvements:

1. Add comments at the beginning of such files to indicate that they
leverage the implementation in Nova.

https://github.com/openstack/mogan/blob/master/mogan/baremetal/ironic/driver.py#L19
https://github.com/openstack/mogan/blob/master/mogan/console/websocketproxy.py#L17-L18
https://github.com/openstack/mogan/blob/master/mogan/consoleauth/manager.py#L17
https://github.com/openstack/mogan/blob/master/mogan/engine/configdrive.py#L17
https://github.com/openstack/mogan/blob/master/mogan/engine/metadata.py#L18
https://github.com/openstack/mogan/blob/master/mogan/network/api.py#L18
https://github.com/openstack/mogan/blob/master/mogan/objects/aggregate.py#L17
https://github.com/openstack/mogan/blob/master/mogan/objects/keypair.py#L17
https://github.com/openstack/mogan/blob/master/mogan/objects/server_fault.py#L17
https://github.com/openstack/mogan/blob/master/mogan/objects/server_group.py#L17
https://github.com/openstack/mogan/blob/master/mogan/scheduler/client/report.py#L17
https://github.com/openstack/mogan/blob/master/mogan/scheduler/filter_scheduler.py#L17

2. For changes that port what Nova changed, reference the
original authors in the commit messages.
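
For illustration, a file-header note of the kind described in point 1 might look like this (the wording and referenced file name here are hypothetical; the linked Mogan files above contain the real comments):

```python
# Hypothetical attribution header of the kind described in point 1;
# the wording and source file name are illustrative, not Mogan's actual text.

HEADER = """\
# Copyright 2017 The Mogan Authors.
#
# This module leverages the implementation of nova/network/api.py;
# see the Nova source tree for the original copyright and authors.
"""

print(HEADER.splitlines()[0])
```

Keeping the pointer to the exact Nova file makes it possible to trace the provenance later, even after both trees diverge.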


Please let me know if there is anything else we need to do, or if there
are existing principles we can follow. Thanks!



[1] https://wiki.openstack.org/wiki/Mogan


-- 
Best Regards,
Zhenguo Niu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

