Re: [openstack-dev] [nova] Hold off on pushing new patches for config option cleanup

2016-08-27 Thread Matt Riedemann

On 8/22/2016 5:52 PM, Michael Still wrote:

It's a shame so many are -2'ed. There is a lot there I could have merged
yesterday if it wasn't for that.

Michael



When I went through and -2'ed things, I left alone those that already had a +2 
on them, so the -2 really landed on the stuff that had not been reviewed yet or had a -1.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [kolla][oisc] scenario #5 running

2016-08-27 Thread Steven Dake (stdake)
Scenario #6 is running – should finish in 4-6 hours.

Regards
-steve

From: Steven Dake <std...@cisco.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Saturday, August 27, 2016 at 4:34 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [kolla][oisc] scenario #5 running

Hey folks,

I didn't undo any of what Dave did to the cluster.  I wrote a simple shell 
script to run time(1) on all of the operations.  We will be doing the same thing 
for liberty and mitaka.  I expect the test of all operations will take about 40 
minutes to 2 hours.

Full instructions are on the tmux screen for scenario 6 once scenario 5 
finishes.  Note you will need to undo the things Dave has done.  If you get to 
it before I do and have questions, ping me on IRC.

Regards
-steve



Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-27 Thread HU, BIN
The challenge in OpenStack is how to enable the innovation built on top of 
OpenStack.

So telco use cases are not only the innovation built on top of OpenStack. 
Instead, telco use cases, e.g. Gluon (NFV networking), vCPE Cloud, Mobile 
Cloud, and Mobile Edge Cloud, bring the requirements needed for innovation in 
OpenStack itself. If OpenStack doesn't address those basic requirements, the 
innovation will never happen on top of OpenStack.

An example: a self-driving car is built on top of many technologies, such as 
sensors/cameras, AI, maps, middleware, etc. The innovations in each of those 
technologies come together to make the self-driving car possible.

WE NEED INNOVATION IN OPENSTACK in order to enable the innovation built on top 
of OpenStack.

Thanks
Bin
-Original Message-
From: Edward Leafe [mailto:e...@leafe.com] 
Sent: Saturday, August 27, 2016 10:49 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [all][massively 
distributed][architecture]Coordination between actions/WGs

On Aug 27, 2016, at 12:18 PM, HU, BIN  wrote:

>> From a telco perspective, those are the areas that allow innovation, and 
>> provide telco customers with new types of services.
> 
> We need innovation, starting with not limiting ourselves in bringing new 
> ideas and new use cases, and turning those impossibilities into reality.

There is innovation in OpenStack, and there is innovation in things built on 
top of OpenStack. We are simply trying to keep the two layers from getting 
confused.


-- Ed Leafe








Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-27 Thread Edward Leafe
On Aug 27, 2016, at 12:18 PM, HU, BIN  wrote:

>> From a telco perspective, those are the areas that allow innovation, and 
>> provide telco customers with new types of services.
> 
> We need innovation, starting with not limiting ourselves in bringing new 
> ideas and new use cases, and turning those impossibilities into reality.

There is innovation in OpenStack, and there is innovation in things built on 
top of OpenStack. We are simply trying to keep the two layers from getting 
confused.


-- Ed Leafe









Re: [openstack-dev] [OpenStack-docs] [cinder] [api] [doc] API status report

2016-08-27 Thread Andreas Jaeger
On 08/26/2016 11:33 PM, Anne Gentle wrote:
> Hi cinder block storage peeps: 
> 
> I haven't heard from you on your comfort level with publishing so I went
> ahead and made the publishing job myself with this review:
> 
> https://review.openstack.org/361475
> 
> Please let me know your thoughts there. Is the document ready to
> publish? Need anything else to get comfy? Let me know.
> 

The current api-ref does not build at all, so let's not merge 361475 yet.

I've rebased https://review.openstack.org/#/c/322489 and added
https://review.openstack.org/#/c/361616 so that the cinder api-ref
follows the same patterns as other repositories - including building and
reviewing on docs-draft.

Once those two are in, we can merge 361475.

Cinder team, could you prioritize these reviews, please?

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-27 Thread HU, BIN
IMHO, we shouldn't limit ourselves.

If we broaden our view to see vCPE in its entirety, rather than as any standalone VNF, it 
could be a cloud of vCPEs. It could be an enterprise cloud on top of enterprise 
vCPEs, or a community cloud across several organizations, including vCPEs within 
residential communities.

There is another concept of a "mobile cloud", where a cloud infrastructure is 
formed on top of mobile devices. Sound crazy? Well, no one believed 
self-driving cars could become reality so soon.

From a telco perspective, those are the areas that allow innovation, and provide 
telco customers with new types of services.

We need innovation, starting with not limiting ourselves in bringing new ideas 
and new use cases, and turning those impossibilities into reality.

Thanks
Bin
-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Saturday, August 27, 2016 2:47 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all][massively 
distributed][architecture]Coordination between actions/WGs

On 08/25/2016 06:38 PM, joehuang wrote:
> Hello, Ed,
>
> Just as Peter mentioned,  "BT's NFV use cases e.g. vCPE, vCDN, vEPC, vIMS, 
> MEC, IoT, where we will have compute highly distributed around the network 
> (from thousands to millions of sites)".  vCPE is only one use case, not all 
> of them. And the hardware facility to run "vCDN, vEPC, vIMS, MEC" is not a 
> set-top box or a single piece of hardware; even in the current non-cloud way, 
> it includes lots of blades, rack servers, chassis, or racks.

Note that I have only questioned the use case of vCPE (and IoT) as "cloud use 
cases". Content delivery networks, evolved packet core, and IP multimedia 
subsystem services are definitely cloud use cases, IMHO, since they belong as 
VNFs managed in a shared datacenter infrastructure.

> A whitepaper was just created "Accelerating NFV Delivery with 
> OpenStack" https://www.openstack.org/telecoms-and-nfv/

Nothing in the whitepaper above has anything to do with vCPE.

> So it's part of a cloud architecture,

No, it's not. vCPE is definitely not a "cloud architecture".

 > the challenge is how OpenStack can run "regardless of size" and in a "massively 
 > distributed" manner.

No, that is not OpenStack's challenge.

It is the Telco industry's challenge to create purpose-built Telco software 
delivery mechanisms, just like it's the enterprise database and middleware 
industry's challenge to create RDBMS systems to meet the modern 
micro-service-the-world landscape in which we live.

Asking the OpenStack community to solve a very specific Telco application 
delivery need is like asking the OpenStack community to write a relational 
database system that works best on 10 million IoT devices. It's just not in our 
list of problem domains to tackle.

Best,
-jay

> Best Regards
> Chaoyi Huang (joehuang)
> 
> From: Ed Leafe [e...@leafe.com]
> Sent: 25 August 2016 22:03
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [all][massively 
> distributed][architecture] Coordination between actions/WGs
>
> On Aug 24, 2016, at 8:42 PM, joehuang  wrote:
>>
>> Funny point of view. Let's look at the mission of OpenStack:
>>
>> "to produce the ubiquitous Open Source Cloud Computing platform that 
>> enables building interoperable public and private clouds regardless 
>> of size, by being simple to implement and massively scalable while serving 
>> the cloud users'
>> needs."
>>
>> It mentioned that "regardless of size", and you also mentioned "cloud to me:
>> lots of hardware consolidation".
>
> If it isn't part of a cloud architecture, then it isn't part of OpenStack's 
> mission. The 'size' qualifier relates to everything from massive clouds like 
> CERN and Walmart down to small private clouds. It doesn't mean 'any sort of 
> computing platform'; the focus is clear that we are an "Open Source Cloud 
> Computing platform".
>
>
> -- Ed Leafe
>
>
>
>
>
>
>



Re: [openstack-dev] [all][massively distributed][architecture] Coordination between actions/WGs

2016-08-27 Thread Davanum Srinivas
LOL Thierry!

On Sat, Aug 27, 2016 at 8:44 AM, Thierry Carrez  wrote:
> Jay Pipes wrote:
>> [...]
>> However, I have not heard vCPE described in that way. v[E]CPE is all
>> about enabling a different kind of application delivery for Telco
>> products/services. Instead of sending the customer new hardware -- or
>> installing a giant monolith application with feature toggles all over
>> the place -- the Telco delivers to the customer a set-top box that has
>> the ability to pull virtual machine images with an application that the
>> customer desires.
>
> I'll defer to your acute knowledge of all those cryptic acronyms. On the
> flip side, that means next time I wonder what (for example) vIMS could
> mean, I'll ask you. (would be great if it was an attempt at virtualizing
> and cloning dims)
>
> --
> Thierry Carrez (ttx)
>



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] A daily life of a dev in ${name_your_fav_project}

2016-08-27 Thread Gregory Haynes
On Fri, Aug 26, 2016, at 11:03 AM, Joshua Harlow wrote:
> Hi folks (dev and more!),
> 
> I was having a conversation with some folks at godaddy around our future 
> plans for a developer lab (where we can have various setups of 
> networking, compute, storage...) for 'exploring' purposes (testing out a 
> new LBAAS for example or ...) as well as for 'developer' purposes 
> (aka, the 'I need to reproduce a bug or work on a feature that requires 
> having a setup that more closely mimics what we have in staging or
> production').
> 
> And it got me thinking about how other developers (and other companies) 
> are doing this. Do various companies have shared labs that their 
> developers get partitions of for (periods of) usage (for example for a 
> volume vendor I would expect this) or if you are a networking company do 
> you hand out miniature networks (with associated gear) as needed (or do 
> you build out such labs via SDN and software only)?
> 
> Then of course there are the people developing libraries (somewhat of my 
> territory). Part of that development can just be done locally by running 
> tox and such, but often times even that is not sufficient (for example, 
> pick oslo.messaging or oslo.db: altering these in ways that still pass 
> unit tests could end up breaking their integration with other projects); 
> the gate helps here (but the gate really is a 'last barrier'). So for 
> folks that have been working on, say, zeromq or the newer amqp versions, 
> what is the daily life of testing and exploring features and development 
> for you?
> 
> Are any of the environments that people use being built out on 
> demand (i.e. in a cloud-like manner)? For example I could see how it could 
> be pretty nifty to request an environment be built out with say something 
> like the following as a descriptor language:
> 
> build_out:
>   nova:
>     git_url: git://git.openstack.org/openstack/nova
>     git_ref: 
>   neutron:
>     git_url: 
>     git_ref: my sha
>
> topology:
>   use_standard_config: true
>   build_with_switch_type: XYZ...

I've been playing around with using diskimage-builder to build images
using an input which looks amazingly similar to the top half of this
[1].  I haven't quite taken the plunge to use this for my usual dev
environment, but my hope has been to use this + the built-in dib caching
+ docker/other container/qcow2 outputting to perform more realistic
tests of what I've been developing. The nifty bit is we've had tooling to
override the location of any repositories in order to be useful with
Zuul, so it is trivial to support a set of inputs like this and then
override them to refer to an on-disk location (rather than git.o.o) [2].
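
As a rough sketch of that override trick (not Greg's actual tooling): the
source-repositories element in diskimage-builder documents
DIB_REPOLOCATION_<name>/DIB_REPOREF_<name> environment variables, so a dev run
could point an element's repository at a local checkout before calling
disk-image-create. The element name, paths and image name below are
illustrative assumptions only.

import os
import subprocess

# Hypothetical example: point the "nova" source repository at a local
# checkout and a feature branch instead of git.openstack.org.
overrides = {
    "DIB_REPOLOCATION_nova": "/home/dev/src/nova",
    "DIB_REPOREF_nova": "my-feature-branch",
}

env = dict(os.environ, **overrides)

# "python-apps" and the output image name are placeholders for illustration.
subprocess.check_call(
    ["disk-image-create", "-o", "dev-test-image", "ubuntu", "python-apps"],
    env=env,
)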

> 
> I hope this info is not just useful to myself (and maybe it's been 
> talked about before, but nothing recent that I can recall), and I'd be 
> very much interested in hearing what other companies (big and small) are 
> doing here (and also from folks that are not associated with any 
> company, which I guess brings in the question of the OSIC lab).
> 
> -Josh
> 

Cheers,
Greg

[1]:
https://review.openstack.org/#/c/336933/2/elements/python-apps/README.rst
[2]:
http://docs.openstack.org/developer/diskimage-builder/elements/source-repositories/README.html



Re: [openstack-dev] [all][massively distributed][architecture] Coordination between actions/WGs

2016-08-27 Thread Thierry Carrez
Jay Pipes wrote:
> [...]
> However, I have not heard vCPE described in that way. v[E]CPE is all
> about enabling a different kind of application delivery for Telco
> products/services. Instead of sending the customer new hardware -- or
> installing a giant monolith application with feature toggles all over
> the place -- the Telco delivers to the customer a set-top box that has
> the ability to pull virtual machine images with an application that the
> customer desires.

I'll defer to your acute knowledge of all those cryptic acronyms. On the
flip side, that means next time I wonder what (for example) vIMS could
mean, I'll ask you. (would be great if it was an attempt at virtualizing
and cloning dims)

-- 
Thierry Carrez (ttx)



[openstack-dev] [kolla][oisc] scenario #5 running

2016-08-27 Thread Steven Dake (stdake)
Hey folks,

I didn't undo any of what Dave did to the cluster.  I wrote a simple shell 
script to run time(1) on all of the operations.  We will be doing the same thing 
for liberty and mitaka.  I expect the test of all operations will take about 40 
minutes to 2 hours.
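
For anyone who wants to reproduce the timing part, here is a minimal sketch of
that kind of wrapper (this is not Steve's actual script, and the kolla-ansible
operation list is only an assumed example):

import subprocess
import time

# Assumed operation list for illustration; substitute the real commands
# used in the scenario.
OPERATIONS = [
    ["kolla-ansible", "deploy"],
    ["kolla-ansible", "reconfigure"],
    ["kolla-ansible", "upgrade"],
]

for cmd in OPERATIONS:
    start = time.time()
    subprocess.check_call(cmd)
    print("{0} took {1:.1f}s".format(" ".join(cmd), time.time() - start))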

Full instructions are on the tmux screen for scenario 6 once scenario 5 
finishes.  Note you will need to undo the things Dave has done.  If you get to 
it before I do and have questions, ping me on IRC.

Regards
-steve



Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-27 Thread Jay Pipes

On 08/25/2016 06:38 PM, joehuang wrote:

Hello, Ed,

Just as Peter mentioned,  "BT's NFV use cases e.g. vCPE, vCDN, vEPC, vIMS, MEC, IoT, where we 
will have compute highly distributed around the network (from thousands to millions of sites) 
".  vCPE is only one use case, not all of them. And the hardware facility to run "vCDN, vEPC, 
vIMS, MEC" is not a set-top box or a single piece of hardware; even in the current non-cloud 
way, it includes lots of blades, rack servers, chassis, or racks.


Note that I have only questioned the use case of vCPE (and IoT) as 
"cloud use cases". Content delivery networks, evolved packet core, and IP 
multimedia subsystem services are definitely cloud use cases, IMHO, 
since they belong as VNFs managed in a shared datacenter infrastructure.



A whitepaper was just created "Accelerating NFV Delivery with OpenStack" 
https://www.openstack.org/telecoms-and-nfv/


Nothing in the whitepaper above has anything to do with vCPE.


So it's part of a cloud architecture,


No, it's not. vCPE is definitely not a "cloud architecture".

 > the challenge is how OpenStack can run "regardless of size" and in a 
 > "massively distributed" manner.


No, that is not OpenStack's challenge.

It is the Telco industry's challenge to create purpose-built Telco 
software delivery mechanisms, just like it's the enterprise database and 
middleware industry's challenge to create RDBMS systems to meet the 
modern micro-service-the-world landscape in which we live.


Asking the OpenStack community to solve a very specific Telco 
application delivery need is like asking the OpenStack community to 
write a relational database system that works best on 10 million IoT 
devices. It's just not in our list of problem domains to tackle.


Best,
-jay


Best Regards
Chaoyi Huang (joehuang)

From: Ed Leafe [e...@leafe.com]
Sent: 25 August 2016 22:03
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][massively distributed][architecture] 
Coordination between actions/WGs

On Aug 24, 2016, at 8:42 PM, joehuang  wrote:


Funny point of view. Let's look at the mission of OpenStack:

"to produce the ubiquitous Open Source Cloud Computing platform that enables
building interoperable public and private clouds regardless of size, by being
simple to implement and massively scalable while serving the cloud users'
needs."

It mentioned that "regardless of size", and you also mentioned "cloud to me:
lots of hardware consolidation".


If it isn’t part of a cloud architecture, then it isn’t part of OpenStack’s mission. 
The ‘size’ qualifier relates to everything from massive clouds like CERN and Walmart 
down to small private clouds. It doesn’t mean ‘any sort of computing platform’; the 
focus is clear that we are an "Open Source Cloud Computing platform”.


-- Ed Leafe











Re: [openstack-dev] [all][massively distributed][architecture] Coordination between actions/WGs

2016-08-27 Thread Jay Pipes

On 08/25/2016 11:08 AM, Thierry Carrez wrote:

Jay Pipes wrote:

[...]
How is vCPE a *cloud* use case?

From what I understand, the v[E]CPE use case is essentially that Telcos
want to have the set-top boxen/routers that are running cable television
apps (i.e. AT&T U-verse or Verizon FiOS-like things for US-based
customers) and home networking systems (broadband connectivity to a
local central office or point of presence, etc) be able run on virtual
machines to make deployment and management of new applications easier.
Since all those home routers and set-top boxen are essentially just
Linux boxes, the infrastructure seems to be there to make this a
cost-savings reality for Telcos. [1]

The problem is that that isn't remotely a cloud use case. Or at least,
it doesn't describe what I think of as cloud.
[...]


My read on that is that they want to build a cloud using the computing
power in those set-top boxes and be able to distribute workloads to them
(in an API/cloudy manner). So yes, essentially nova-compute nodes on
those set-top boxes. It feels like that use case fits your description
of "cloud", only their datacenter ends up being distributed in their
customers' homes (and conveniently using your own electricity/cooling)?


That would indeed be interesting, even if far-fetched. [1]

However, I have not heard vCPE described in that way. v[E]CPE is all 
about enabling a different kind of application delivery for Telco 
products/services. Instead of sending the customer new hardware -- or 
installing a giant monolith application with feature toggles all over 
the place -- the Telco delivers to the customer a set-top box that has 
the ability to pull virtual machine images with an application that the 
customer desires.


What vCPE is about is co-opting the term "cloud" to mean changing the 
delivery mechanism for Telco software. [2]


Like you said on April 1st, Thierry, "on the Internet of Things, nobody 
knows you're a fridge".


The problem with vCPE is that it's essentially playing an April Fool's 
joke on the cloud management software industry. "In vCPE, nobody knows 
you're not actually a cloud, but instead you're a $5 whitelabel router 
sitting underneath a pile of sweaters in a closet."


Best,
-jay

[1] I look forward to the OpenStack Cloud powered by 10 million Apple 
Watches. Actually no, I don't. That sounds like a nightmare to me.


[2] To be perfectly clear, I have nothing against Telcos wanting to 
change their method of software delivery. Go for it! Embrace modern 
delivery mechanisms. But, that ain't cloud and it ain't OpenStack, IMHO.




Re: [openstack-dev] [Neutron][Nova] Neutron mid-cycle summary report

2016-08-27 Thread Miguel Angel Ajo Pelayo
Hi Armando,

Thanks for the report, I'm adding some notes inline (OSC/SDK)

On Sat, Aug 27, 2016 at 2:13 AM, Armando M.  wrote:
> Hi Neutrinos,
>
> For those of you who couldn't join in person, please find a few notes below
> to capture some of the highlights of the event.
>
> I would like to thank everyone one who helped me put this report together,
> and everyone who helped make this mid-cycle a fruitful one.
>
> I would also like to thank IBM, and the individual organizers who made
> everything go smoothly. In particular Martin, who put up with our moody
> requests: thanks Martin!!
>
> Feel free to reach out/add if something is unclear, incorrect or incomplete.
>
> Cheers,
> Armando
>
> ~~~
>
> We touched on these topics (as initially proposed on
> https://etherpad.openstack.org/p/newton-neutron-midcycle-workitems)
>
> Keystone v3 and project-id adoption:
>
> dasm and amotoki have been working on making the Neutron server process
> project-id correctly [1]. Looking at the spec [2], we are halfway through
> having completed the DB migration, being Keystone v3 compliant, and having
> updated the client bindings [3].
>
> [1] https://review.openstack.org/#/c/357977/
> [2] https://review.openstack.org/#/c/257362/
> [3] https://review.openstack.org/#/q/topic:bp/keystone-v3
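
As an aside, and purely as an illustration (not the actual Neutron models or
the agreed migration approach): a common SQLAlchemy pattern for a
tenant_id -> project_id rename is to keep the old attribute name as a synonym
of the new column while callers are converted, e.g.

import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import synonym

Base = declarative_base()

# Hypothetical model: "example_networks" is a placeholder table name.
class Network(Base):
    __tablename__ = "example_networks"
    id = sa.Column(sa.String(36), primary_key=True)
    project_id = sa.Column(sa.String(255), index=True)
    # The old attribute name still resolves to the renamed column during
    # the transition, so existing callers keep working.
    tenant_id = synonym("project_id")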
>
> Neutron-lib:
>
> HenryG, dougwig and kevinbenton worked out a plan to get the common_db_mixin
> into neutron-lib. Because of the risk of regression, this is being deferred
> until Ocata opens up. However, simpler changes like the model_base move
> to lib were agreed on and merged.
> A plan to provide test support was discussed. The current strategy involves
> providing test base classes in lib (this reverses the stance conveyed in
> Austin). The usual steps involved are making the currently private classes
> public, ensuring the lib's copies are up-to-date with core neutron,
> and deprecating the ones located in Neutron.
> rtheis and armax worked on having networking-ovn test periodically against
> neutron-lib [1,2,3].
>
> [1] https://review.openstack.org/#/c/357086/
> [2] https://review.openstack.org/#/c/359143/
> [3] https://review.openstack.org/#/c/357079/
>
> A tool (tools/migration_report.sh) helps project teams determine the level of
> dependency they have on Neutron. It should be improved to report the exact
> offending imports.
> Right now neutron-lib 0.4.0 is released and available in
> global-requirements/upper-constraints.
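
To illustrate the "report the exact offending imports" idea above (a sketch
only, not the existing tools/migration_report.sh and not something the team
has agreed on): a small AST-based check could list every remaining neutron.*
import in a project, for example:

import ast
import pathlib

# Hypothetical helper: list the exact neutron.* imports a project still
# carries, so they can be migrated to neutron-lib.
def offending_imports(root="."):
    for path in pathlib.Path(root).rglob("*.py"):
        tree = ast.parse(path.read_text(), filename=str(path))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                names = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom) and node.module:
                names = [node.module]
            else:
                continue
            for name in names:
                if name == "neutron" or name.startswith("neutron."):
                    yield "{0}:{1}: imports {2}".format(path, node.lineno, name)

if __name__ == "__main__":
    for line in offending_imports():
        print(line)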
>
> Objects and hitless upgrades:
>
> Ihar gave the team an overview and status update [1]
> There was a fruitful discussion that hopefully set the way forward for
> Ocata. The discussed plan was to start Ocata with the expectation that no
> new contract scripts are landing in Ocata, and to revisit the requirement
> later if for some reason we see any issue with applying the requirement in
> practice.
> Some work was done to deliver the necessary objects for push-notifications;
> patches are up for review. Some review cycles were spent on landing
> patches moving model definitions under neutron/db/models.
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2016-August/101838.html
>
> OSC transition:
>
> rtheis gave an update to the team on the state of the transition. Core
> resource commands are all available through OSC; QoS, Metering and *-aaS
> are still not converted.

QoS is being pushed up by Rodolfo in a series of bugs on SDK/OSC:
  
https://review.openstack.org/#/q/owner:rodolfo.alonso.hernandez%2540intel.com+status:open

Those are almost there.

> There is some confusion on how to tackle openstacksdk support. We discussed
> the future goal of Python bindings for the Networking API. OSC uses the
> OpenStack SDK for network commands, while the Neutron OSC plugin uses the
> Python bindings from python-neutronclient. An open question is which project
> developers who add new features should implement against: the OpenStack SDK,
> python-neutronclient, or both? There was no conclusion at the mid-cycle. It
> is not specific to neutron; a similar situation can happen for nova, cinder
> and other projects, and we need to raise it with the community.
>
> Ocata is going to be the first release where the neutronclient CLI is
> officially deprecated. It may take us more than the usual two cycles to
> remove it altogether, but that's a signal to developers and users to
> seriously develop against OSC, and report bugs against OSC.
> Several pending contributions into osc-lib.
> An update is available on [1,2]
>
> [1] https://review.openstack.org/#/c/357844/
> [2] https://etherpad.openstack.org/p/osc-neutron-support
>
> Stability squash:
>
> armax was bug deputy for the week of the mid-cycle; nothing critical showed
> up in the gate. However, the pluggable ipam switch [1] merged, which might have
> some unexpected repercussions down the road.
> A number of bugs older than a year were made expirable [2].
> kevinbenton and armax devised a strategy and started working on [3] to
> ensure DB retriable errors are no longer handled at the API layer.
> T