[openstack-dev] [docs] New Four Opens Project

2018-11-12 Thread Chris Hoge
Earlier this year, the OpenStack Foundation staff had the opportunity to 
brainstorm some ideas about how to express the values behind The Four Opens and 
how they are applied in practice. As the Foundation grows in scope to include 
new strategic focus areas and new projects, we felt it was important to provide 
explanation and guidance on the principles that guide our community.

We’ve collected these notes and have written some seed content to start this
document. I’ve staged this work on GitHub and have prepared a review to move it
into OpenStack hosting, turning this over to the community to help guide and
shape the document.

This is very much a work in progress, but we have a goal to polish this up and 
make it an important document that captures our vision and values for the 
OpenStack development community, guides the establishment of governance for new 
top-level projects, and is a reference for the open-source development 
community as a whole.

I also want to be clear that the original Four Opens, as listed in the 
OpenStack governance page, is an OpenStack TC document. This project doesn’t 
change that. Instead, it is meant to be applied to the Foundation as a whole 
and be a reference to the new projects that land both as pilot top-level 
projects and projects hosted by our new infrastructure efforts.

Thanks to all of the original authors of the Four Opens for your visionary work 
that started this process, and thanks in advance to the community members who 
will continue to grow and evolve this document.

Chris Hoge
OpenStack Foundation

Four Opens: https://governance.openstack.org/tc/reference/opens.html
New Project Review Patch: https://review.openstack.org/#/c/617005/
Four Opens Document Staging: https://github.com/hogepodge/four-opens

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack Foundation Community Meeting - October 24 - StarlingX

2018-10-23 Thread Chris Hoge
On Wednesday, October 24 we will host our next Foundation community
meeting at 8:00 PT / 15:00 UTC. This meeting will focus on an update
on StarlingX, one of the projects in the Edge Computing Strategic Focus
Area.

The full agenda is here:
https://etherpad.openstack.org/p/openstack-community-meeting

Do you have something you'd like to discuss or share with the community?
Please send it to me so that I can schedule it for future meetings.

Thanks,
Chris

(Calendar invite attached: "StarlingX First Release, Community Webinar",
Wednesday, October 24, 15:00-16:00 UTC.
Zoom: https://zoom.us/j/112003649
Agenda: https://etherpad.openstack.org/p/openstack-community-meeting)


[openstack-dev] OpenStack Community Meeting - October 10 - Strategic Area Governance Update

2018-10-04 Thread Chris Hoge
Following the interest in the first OpenStack Foundation community
meeting, where we discussed the OpenStack Rocky release as well as quick
updates from Kata, Airship, StarlingX and Zuul, we want to keep the
community meetings going. The second community meeting will be October 10
at 8:00 PT / 15:00 UTC, and the agenda is for Jonathan Bryce and Thierry
Carrez to share the latest plans for strategic project governance at the
Foundation. These updates will include the process for creating new
Strategic Focus Areas, and the lifecycle of new Foundation supported
projects. There will be an opportunity to share feedback, as well as a
question-and-answer session at the end of the presentation.

For a little context about the proposed plan for strategic project
governance, you can read Jonathan’s email to the Foundation mailing list:
http://lists.openstack.org/pipermail/foundation/2018-August/002617.html

This meeting will be recorded and made publicly available. This is part
of our plan to introduce bi-weekly OpenStack Foundation community meetings
that will cover topics like Foundation strategic area updates, project
demonstrations, and other community efforts. We expect the next meeting
to take place October 24 and focus on the anticipated StarlingX release.
Do you have something you'd like to discuss or share with the community?
Please send it to me so that I can schedule it for future meetings.

OpenStack Community Meeting - Strategic Area Governance Update
Date & Time: October 10, 8:00 PT / 15:00 UTC
Zoom Meeting Link: https://zoom.us/j/312447172
Agenda: https://etherpad.openstack.org/p/openstack-community-meeting

Thanks!
Chris Hoge
Strategic Program Manager
OpenStack Foundation


[openstack-dev] [k8s][magnum][zun] Notification of removal of in-tree K8s OpenStack Provider

2018-10-02 Thread Chris Hoge
For those projects that use OpenStack as a cloud provider for K8s, there
is a patch in flight[1] to remove the in-tree OpenStack provider from the
kubernetes/kubernetes repository. The provider has been deprecated for
two releases, with a replacement external provider available[2]. Before
we merge this patch for the 1.13 K8s release cycle, we want to make sure
that projects dependent on the in-tree provider (especially thinking
about projects like Magnum and Zun) have an opportunity to express their
readiness to switch over.

[1] https://github.com/kubernetes/kubernetes/pull/67782
[2] https://github.com/kubernetes/cloud-provider-openstack
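
The switch shows up in node configuration: the in-tree code path is selected
with `--cloud-provider=openstack`, while the external provider requires
`--cloud-provider=external` plus a running cloud-provider-openstack deployment.
As a non-authoritative sketch (the helper function below is hypothetical, for
illustration only), a migration tool might flag clusters still on the in-tree
path like this:

```python
# Hypothetical helper: classify whether a kubelet/controller-manager command
# line still relies on the in-tree OpenStack provider or has moved to the
# external one. Only the --cloud-provider flag values are real Kubernetes
# behavior; the function itself is an illustrative sketch.
def provider_mode(kubelet_args):
    """Return 'in-tree', 'external', or 'none' for a list of kubelet flags."""
    for arg in kubelet_args:
        if arg.startswith("--cloud-provider="):
            value = arg.split("=", 1)[1]
            if value == "openstack":
                return "in-tree"   # breaks once the in-tree code is removed
            if value == "external":
                return "external"  # expects cloud-provider-openstack to run
    return "none"

print(provider_mode(["--cloud-provider=openstack"]))  # in-tree
```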




[openstack-dev] [k8s][tc] List of OpenStack and K8s Community Updates

2018-09-27 Thread Chris Hoge
In the last year the SIG-K8s/SIG-OpenStack group has facilitated quite
a bit of discussion between the OpenStack and Kubernetes communities.
In doing this work we've delivered a number of presentations and held
several working sessions. I've created an etherpad that contains links
to these documents as a reference to the work and the progress we've
made. I'll continue to keep the document updated, and if I've missed
any links please feel free to add them.

https://etherpad.openstack.org/p/k8s-openstack-updates

-Chris




[openstack-dev] [k8s] SIG-K8s PTG Meetings, Monday September 10, 2018

2018-09-09 Thread Chris Hoge
SIG-K8s has space reserved in Ballroom A for all of Monday, September 10
at the PTG. We will begin at 9:00 with a planning session, similar to that
in Dublin, where we will organize topics and times for the remainder of
the day.

The planning etherpad can be found here: 
https://etherpad.openstack.org/p/sig-k8s-2018-denver-ptg
The link to the Dublin planning etherpad: 
https://etherpad.openstack.org/p/sig-k8s-2018-dublin-ptg

Thanks,
Chris




Re: [openstack-dev] [Openstack-sigs] [all] Bringing the community together (combine the lists!)

2018-08-30 Thread Chris Hoge
I propose that we also merge the interop-wg mailing list,
as the volume on that list is small but topics posted to it are of
general interest to the community.

Chris Hoge
(Interop WG Secretary, amongst other things)

> On Aug 30, 2018, at 10:03 AM, Jeremy Stanley  wrote:
> 
> The openstack, openstack-dev, openstack-sigs and openstack-operators
> mailing lists on lists.openstack.org see an increasing amount of
> cross-posting and thread fragmentation as conversants attempt to
> reach various corners of our community with topics of interest to
> one or more (and sometimes all) of those overlapping groups of
> subscribers. For some time we've been discussing and trying ways to
> bring our developers, distributors, operators and end users together
> into a less isolated, more cohesive community. An option which keeps
> coming up is to combine these different but overlapping mailing
> lists into one single discussion list. As we covered[1] in Vancouver
> at the last Forum there are a lot of potential up-sides:
> 
> 1. People with questions are no longer asking them in a different
> place than many of the people who have the answers to those
> questions (the "not for usage questions" in the openstack-dev ML
> title only serves to drive the wedge between developers and users
> deeper).
> 
> 2. The openstack-sigs mailing list hasn't seen much uptake (an order
> of magnitude fewer subscribers and posts) compared to the other
> three lists, yet it was intended to bridge the communication gap
> between them; combining those lists would have been a better
> solution to the problem than adding yet another turned out to be.
> 
> 3. At least one out of every ten messages to any of these lists is
> cross-posted to one or more of the others, because we have topics
> that span across these divided groups yet nobody is quite sure which
> one is the best venue for them; combining would eliminate the
> fragmented/duplicative/divergent discussion which results from
> participants following up on the different subsets of lists to which
> they're subscribed.
> 
> 4. Half of the people who are actively posting to at least one of
> the four lists subscribe to two or more, and a quarter to three if
> not all four; they would no longer be receiving multiple copies of
> the various cross-posts if these lists were combined.
> 
> The proposal is simple: create a new openstack-discuss mailing list
> to cover all the above sorts of discussion and stop using the other
> four. As the OpenStack ecosystem continues to mature and its
> software and services stabilize, the nature of our discourse is
> changing (becoming increasingly focused with fewer heated debates,
> distilling to a more manageable volume), so this option is looking
> much more attractive than in the past. That's not to say it's quiet
> (we're looking at roughly 40 messages a day across them on average,
> after deduplicating the cross-posts), but we've grown accustomed to
> tagging the subjects of these messages to make it easier for other
> participants to quickly filter topics which are relevant to them and
> so would want a good set of guidelines on how to do so for the
> combined list (a suggested set is already being brainstormed[2]).
> None of this is set in stone of course, and I expect a lot of
> continued discussion across these lists (oh, the irony) while we try
> to settle on a plan, so definitely please follow up with your
> questions, concerns, ideas, et cetera.
> 
> As an aside, some of you have probably also seen me talking about
> experiments I've been doing with Mailman 3... I'm hoping new
> features in its Hyperkitty and Postorius WebUIs make some of this
> easier or more accessible to casual participants (particularly in
> light of the combined list scenario), but none of the plan above
> hinges on MM3 and should be entirely doable with the MM2 version
> we're currently using.
> 
> Also, in case you were wondering, no the irony of cross-posting this
> message to four mailing lists is not lost on me. ;)
> 
> [1] https://etherpad.openstack.org/p/YVR-ops-devs-one-community
> [2] https://etherpad.openstack.org/p/common-openstack-ml-topics
> -- 
> Jeremy Stanley




[openstack-dev] [magnum] K8s Conformance Testing

2018-08-21 Thread Chris Hoge
As discussed at the Vancouver SIG-K8s and Copenhagen SIG-OpenStack sessions,
we're moving forward with obtaining Kubernetes Conformance certification for
Magnum. While conformance test jobs aren't reliably running in the gate yet,
the requirements of the program make submitting results manually on an
infrequent basis something that we can work with while we wait for more
stable nested virtualization resources. The OpenStack Foundation has signed
the license agreement, and Feilong Wang is preparing an initial conformance
run to submit for certification.

My thanks to the Magnum team for their impressive work on building out an
API for deploying Kubernetes on OpenStack clusters.

[1] https://www.cncf.io/certification/software-conformance/



Re: [openstack-dev] [all][sdk] Integrating OpenStack and k8s with a service broker

2018-06-06 Thread Chris Hoge
Hi Zane,

Do you think this effort would make sense as a subproject within the Cloud
Provider OpenStack repository hosted within the Kubernetes org? We have
a solid group of people working on the cloud provider, and while it’s not
the same code, it’s a collection of the same expertise and test resources.

Even if it's hosted as an OpenStack project, we should still make sure
we have documentation and pointers from the kubernetes/cloud-provider-openstack
to guide users in the right direction.

While I'm not in a position to directly contribute, I'm happy to offer
any support I can through the SIG-OpenStack and SIG-Cloud-Provider
roles I have in the K8s community.

-Chris

> On Jun 5, 2018, at 9:19 AM, Zane Bitter  wrote:
> 
> I've been doing some investigation into the Service Catalog in Kubernetes and 
> how we can get OpenStack resources to show up in the catalog for use by 
> applications running in Kubernetes. (The Big 3 public clouds already support 
> this.) The short answer is via an implementation of something called the Open 
> Service Broker API, but there are shortcuts available to make it easier to do.
> 
> I'm convinced that this is readily achievable and something we ought to do as 
> a community.
> 
> I've put together a (long-winded) FAQ below to answer all of your questions 
> about it.
> 
> Would you be interested in working on a new project to implement this 
> integration? Reply to this thread and let's collect a list of volunteers to 
> form the initial core review team.
> 
> cheers,
> Zane.
> 
> 
> What is the Open Service Broker API?
> 
> 
> The Open Service Broker API[1] is a standard way to expose external resources 
> to applications running in a PaaS. It was originally developed in the context 
> of CloudFoundry, but the same standard was adopted by Kubernetes (and hence 
> OpenShift) in the form of the Service Catalog extension[2]. (The Service 
> Catalog in Kubernetes is the component that calls out to a service broker.) 
> So a single implementation can cover the most popular open-source PaaS 
> offerings.
> 
> In many cases, the services take the form of simply a pre-packaged 
> application that also runs inside the PaaS. But they don't have to be - 
> services can be anything. Provisioning via the service broker ensures that 
> the services requested are tied in to the PaaS's orchestration of the 
> application's lifecycle.
> 
> (This is certainly not the be-all and end-all of integration between 
> OpenStack and containers - we also need ways to tie PaaS-based applications 
> into the OpenStack's orchestration of a larger group of resources. Some 
> applications may even use both. But it's an important part of the story.)
> 
> What sorts of services would OpenStack expose?
> --
> 
> Some example use cases might be:
> 
> * The application needs a reliable message queue. Rather than spinning up 
> multiple storage-backed containers with anti-affinity policies and dealing 
> with the overhead of managing e.g. RabbitMQ, the application requests a Zaqar 
> queue from an OpenStack cloud. The overhead of running the queueing service 
> is amortised across all of the applications in the cloud. The queue gets 
> cleaned up correctly when the application is removed, since it is tied into 
> the application definition.
> 
> * The application needs a database. Rather than spinning one up in a 
> storage-backed container and dealing with the overhead of managing it, the 
> application requests a Trove DB from an OpenStack cloud.
> 
> * The application includes a service that needs to run on bare metal for 
> performance reasons (e.g. could also be a database). The application requests 
> a bare-metal server from Nova w/ Ironic for the purpose. (The same applies to 
> requesting a VM, but there are alternatives like KubeVirt - which also 
> operates through the Service Catalog - available for getting a VM in 
> Kubernetes. There are no non-proprietary alternatives for getting a 
> bare-metal server.)
> 
> AWS[3], Azure[4], and GCP[5] all have service brokers available that support 
> these and many more services that they provide. I don't know of any reason in 
> principle not to expose every type of resource that OpenStack provides via a 
> service broker.
> 
> How is this different from cloud-provider-openstack?
> 
> 
> The Cloud Controller[6] interface in Kubernetes allows Kubernetes itself to 
> access features of the cloud to provide its service. For example, if k8s 
> needs persistent storage for a container then it can request that from Cinder 
> through cloud-provider-openstack[7]. It can also request a load balancer from 
> Octavia instead of having to start a container running HAProxy to load 
> balance between multiple instances of an application container (thus enabling 
> use of hardware load balancers via the cloud's abstraction for them).
> 
> 
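
The Open Service Broker API described above is a small REST contract: a broker
mainly serves a catalog plus provision and bind endpoints. As a hedged,
non-authoritative sketch (the service and plan names below are invented for
illustration, not an actual OpenStack broker), a `GET /v2/catalog` response
might take this shape:

```python
import json
import uuid

# Sketch of what an OpenStack service broker's GET /v2/catalog response could
# look like under the Open Service Broker API v2. The Zaqar entry and all
# names/descriptions are illustrative assumptions, not a real broker's output.
def catalog():
    return {
        "services": [
            {
                "id": str(uuid.uuid4()),
                "name": "openstack-zaqar-queue",
                "description": "Message queue provisioned from an OpenStack cloud",
                "bindable": True,
                "plans": [
                    {
                        "id": str(uuid.uuid4()),
                        "name": "default",
                        "description": "A single Zaqar queue",
                    }
                ],
            }
        ]
    }

print(json.dumps(catalog(), indent=2))
```

The Kubernetes Service Catalog polls this endpoint to populate the services a
PaaS application can request.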

[openstack-dev] [k8s] OpenStack and Containers White Paper

2018-04-02 Thread Chris Hoge
Hi everyone,

In advance of the Vancouver Summit, I'm leading an effort to publish a
community produced white-paper on OpenStack and container integrations.
This has come out of a need to develop materials, both short and long
form, to help explain how OpenStack interacts with container
technologies across the entire stack, from infrastructure to
application. The rough outline of the white-paper proposes an entire
technology stack and discusses deployment and usage strategies at every
level. The white-paper will focus on existing technologies, and how they
are being used in production today across our community. Beginning at
the hardware layer, we have the following outline (which may be inverted
for clarity):

* OpenStack Ironic for managing bare metal deployments.
* Container-based deployment tools for installing and managing OpenStack
   * Kolla containers and Kolla-Ansible
   * Loci containers and OpenStack Helm
* OpenStack-hosted APIs for managing container application
  infrastructure.
   * Magnum
   * Zun
* Community-driven integration of Kubernetes and OpenStack with K8s
  Cloud Provider OpenStack
* Projects that can stand alone in integrations with Kubernetes and
  other cloud technology
   * Cinder
   * Neutron with Kuryr and Calico integrations
   * Keystone authentication and authorization

I'm looking for volunteers to help produce the content for these sections
(and any others we may uncover to be useful) for presenting a complete
picture of OpenStack and container integrations. If you're involved with
one of these projects, or are using any of these tools in
production, it would be fantastic to get your input in producing the
appropriate section. We especially want real-world deployments to use as
small case studies to inform the work.

During the process of creating the white-paper, we will be working with a
technical writer and the Foundation design team to produce a document that
is consistent in voice, has accurate and informative graphics that
can be used to illustrate the major points and themes of the white-paper,
and that can be used as stand-alone media for conferences and
presentations.

Over the next week, I'll be reaching out to individuals and inviting them
to collaborate. This is also a general invitation to collaborate, and if
you'd like to help out with a section please reach out to me here, on the
K8s #sig-openstack Slack channel, or at my work e-mail, ch...@openstack.org.
Starting next week, we'll work out a schedule for producing and delivering
the white-paper by the Vancouver Summit. We are very short on time, so
we will have to be focused to quickly produce high-quality content.

Thanks in advance to everyone who participates in writing this
document. I'm looking forward to working with you in the coming weeks to
publish this important resource for clearly describing the multitude of
interactions between these complementary technologies.

-Chris Hoge
K8s-SIG-OpenStack/OpenStack-SIG-K8s Co-Lead




Re: [openstack-dev] [k8s][octavia][lbaas] Experiences on using the LB APIs with K8s

2018-03-16 Thread Chris Hoge

> On Mar 16, 2018, at 7:40 AM, Simon Leinen  wrote:
> 
> Joe Topjian writes:
>> Terraform hat! I want to slightly nit-pick this one since the words
>> "leak" and "admin-priv" can sound scary: Terraform technically wasn't
>> doing anything wrong. The problem was that Octavia was creating
>> resources but not setting ownership to the tenant. When it came time
>> to delete the resources, Octavia was correctly refusing, though it
>> incorrectly created said resources.
> 
> I dunno... if Octavia created those lower-layer resources on behalf of
> the user, then Octavia shouldn't refuse to remove those resources when
> the same user later asks it to - independent of what ownership Octavia
> chose to apply to those resources.  (It would be different if Neutron or
> Nova were asked by the user directly to remove the resources created by
> Octavia.)
> 
>> From reviewing the discussion, other parties were discovering this
>> issue and patching in parallel to your discovery. Both xgerman and
>> Vexxhost jumped in to confirm the behavior seen by Terraform. Vexxhost
>> quickly applied the patch. It was a really awesome collaboration
>> between yourself, dims, xgerman, and Vexxhost.
> 
> Speaking as another operator: Does anyone seriously expect us to deploy
> a service (Octavia) in production at a stage where it exhibits this kind
> of behavior? Having to clean up leftover resources because the users who
> created them cannot remove them is not my idea of fun.  (And note that
> like most operators, we're a few releases behind, so we might not even
> get access to backports IF this gets fixed.)

Simon and Joe, one thing I was not clear on (again, going back to my statement
that any mistakes I make are my own) is that this behavior, admin-scoped
resources being created and then not released, was seen in the Neutron
LBaaSv2 service. The fix _was_ to deploy Octavia and not use the Neutron API.
As such, I'm reluctant to use Terraform (or really, any other SDK) to deploy
load balancers against the Neutron API. I don't want to be leaking a bunch of
resources I can't delete. It's not good for the apps I'm trying to run and it's
definitely not good for the cloud provider. I have much more confidence
developing against the Octavia service.

We figured this out as a group effort between Vexxhost, Joe, and the Octavia
team, and I'm exceptionally grateful to all of them for helping me to sort
those issues out.

Now, I ultimately dropped it in my own code because I can't rely on the
existence of Octavia across all clouds. It had nothing to do with either the
reliability of the GopherCloud/Terraform SDKs or Octavia itself.

So, to repeat, leaking admin-scoped resources is a Neutron LBaaSv2 bug,
not an Octavia bug.

> In our case we're not a compute-oriented cloud provider, and some of our
> customers would really like to have a good LBaaS as part of our IaaS
> offering.  But our experience with this was so-so in the past - for
> example, we had to help customers migrate from LBaaSv1 to LBaaSv2.  Our
> resources (people, tolerance to user-affecting bugs and forced upgrades
> etc.) are limited, so we've become careful.
> 
> For users who want to use Kubernetes on our OpenStack service, we rather
> point them to Kubernetes's Ingress controller, which performs the LB
> function without requiring much from the underlying cloud.  Seems like a
> fine solution.
> -- 
> Simon.
> 




[openstack-dev] [k8s][octavia][lbaas] Experiences on using the LB APIs with K8s

2018-03-15 Thread Chris Hoge
As I've been working more in the Kubernetes community, I've been evaluating the
different points of integration between OpenStack services and the Kubernetes
application platform. One of the weaker points of integration has been in using
the OpenStack LBaaS APIs to create load balancers for Kubernetes applications.
Using this as a framing device, I'd like to begin a discussion about the
general development, deployment, and usage of the LBaaS API and how different
parts of our community can rally around and strengthen the API in the coming
year.

I'd like to note right from the beginning that this isn't a disparagement of
the fantastic work that's being done by the Octavia team, but rather an
evaluation of the current state of the API and a call to our rich community of
developers, cloud deployers, users, and app developers to help move the API to
a place where it is expected to be present and shows the same level of
consistency across deployments that we see with the Nova, Cinder, and Neutron
core APIs. The seed of this discussion comes from my efforts to enable
third-party Kubernetes cloud provider testing, as well as discussions with the
Kubernetes-SIG-OpenStack community in the #sig-openstack Slack channel in the
Kubernetes organization[0]. As a full disclaimer, my recounting of this 
discussion
represents my own impressions, and although I mention active participants by
name I do not represent their views. Any mistakes I make are my own.

To set the stage, Kubernetes uses a third-party load-balancer service (either
from a Kubernetes hosted application or from a cloud-provider API) to
provide high-availability for the applications it manages. The OpenStack
provider offers a generic interface to the LBaaSv2, with an option to enable
Octavia instead of the Neutron API. The provider is built on the
GopherCloud SDK. In my own efforts to enable testing of this provider, I'm
using Terraform to orchestrate the K8s deployment and installation. Since I
needed to use a public cloud provider to turn this automated testing over to a
third party, I chose Vexxhost, as they have been generous donors to this effort
and to the CloudLab efforts in general, and have provided tremendous support
in debugging problems I've run into. The first major issue I ran into was a
race condition in using the Neutron LBaaSv2 API. It turns out that with
Terraform, it's possible to tear down resources in a way that causes Neutron to
leak administrator-privileged resources that cannot be deleted by
non-privileged users. In discussions with the Neutron and Octavia teams, it was
strongly recommended that I move away from the Neutron LBaaSv2 API and instead
adopt Octavia. Vexxhost graciously installed Octavia at my request and I was
able to move past this issue.

This raises a fundamental issue facing our community with regard to the load
balancer APIs: there is little consistency as to which API is deployed, and we
have installations that still deploy the LBaaSv1 API. Indeed, the OpenStack
User Survey reported in November of 2017 that only 7% of production
installations were running Octavia[1]. Meanwhile, Neutron LBaaSv1 was deprecated
in Liberty, and Neutron LBaaSv2 was recently deprecated in the Queens release.
The lack of a migration path from v1 to v2 helped slow adoption, and the
additional requirements for installing Octavia have likewise slowed adoption
of the supported LBaaSv2 implementation.

This highlights the first call to action for our public and private cloud
community: encouraging the rapid migration from older, unsupported APIs to
Octavia.

Because of this wide range of deployed APIs, I changed my own deployment code
to launch a user-space VM and install a non-TLS-terminating Nginx load balancer
for my Kubernetes control plane[2]. I'm not the only person who has adopted
this approach. In the #sig-openstack channel, Saverio Proto (zioproto)
discussed how he uses the K8s Nginx ingress load balancer[3] in place of the
OpenStack provider load balancer. My takeaway from his description is that the
K8s-based ingress load balancer is preferable because:

* The common LBaaSv2 API does not support TLS termination.
* You don't need to provision an additional virtual machine.
* You aren't dependent on an appropriate and supported API being available on
  your cloud.
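For concreteness, the user-space VM approach described above boils down to
layer-4 TCP passthrough in front of the apiservers. A minimal
non-TLS-terminating Nginx `stream` fragment might look like the following;
the addresses and ports are hypothetical, and this is a sketch rather than my
exact deployment config:

```nginx
# nginx.conf fragment -- TCP (layer 4) passthrough, no TLS termination,
# so the Kubernetes apiservers keep end-to-end TLS with their own certs.
stream {
    upstream k8s_apiservers {
        server 10.0.0.11:6443;   # hypothetical control-plane nodes
        server 10.0.0.12:6443;
        server 10.0.0.13:6443;
    }

    server {
        listen 6443;
        proxy_pass k8s_apiservers;
    }
}
```

Because the stream module only forwards bytes, certificate handling stays on
the apiservers, which is exactly the "non-TLS-terminating" property mentioned
above.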

German Eichberger (xgerman) and Adam Harwell (rm_you) from the Octavia team
were present for the discussion, and presented a strong case for using the
Octavia APIs. My takeaway was:

* Octavia does support TLS termination; it's the dependence on the
  Neutron API that prevents the provider from taking advantage of it.
* It provides a lot more than just a "VM with haproxy", and has stability
  guarantees.

This highlights a second call to action for the SDK and provider developers:
recognizing the end of life of the Neutron LBaaSv2 API[4][5] and adding
support for more advanced Octavia features.
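For context, both the Neutron-backed and Octavia-backed code paths in the
provider are driven by the same user-facing object: a Kubernetes Service of
type LoadBalancer. A minimal example of the manifest that triggers the
provider's load-balancer provisioning (names are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web            # hypothetical application
spec:
  type: LoadBalancer   # asks the active cloud provider for an external LB
  selector:
    app: web
  ports:
  - port: 80           # port exposed on the provisioned load balancer
    targetPort: 8080   # port the application pods listen on
```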

As part of 

[openstack-dev] [k8s] Hosting location for OpenStack Kubernetes Provider

2018-03-13 Thread Chris Hoge
At the PTG in Dublin, SIG-K8s started working towards migrating the
external Kubernetes OpenStack cloud provider[1] work to be an OpenStack
project. Coincident with that, an upstream patch[2] was proposed by 
WG-Cloud-Provider to create upstream Kubernetes repositories for the
various cloud providers.

I want to begin a conversation about where we want this provider code to
live and how we want to manage it. Three main options are to:

1) Host the provider code within the OpenStack ecosystem. The advantages
are that we can follow OpenStack community development practices, and
we have a good list of people signed up to help maintain it. We would
also have easier access to infra test resources. The downside is we pull
the code further away from the Kubernetes community, possibly making it
more difficult for end users to find and use in a way that is consistent
with other external providers.

2) Host the provider code within the Kubernetes ecosystem. The advantage
is that the code will be in a well-defined and well-known place, and
members of the Kubernetes community who want to participate will be able
to continue to use the community practices. We would still be able to
take advantage of infra resources, but it would require more setup to
trigger and report on jobs.

3) Host in OpenStack, and mirror in a Kubernetes repository. We would
need to work with the K8s team to make sure this is an acceptable option,
but it would allow for a hybrid development model that could satisfy the
needs of members of both communities. This would require a commitment
from the K8s-SIG-OpenStack/OpenStack-SIG-K8s team to handle tickets
and pull requests that come in to the Kubernetes-hosted repository.

My personal opinion is that we should take advantage of the Kubernetes
hosting, and migrate the project to one of the repositories listed in
the WG-Cloud-Provider patch. This wouldn't preclude moving it into
OpenStack infra hosting at some point in the future and possibly
adopting the hybrid approach down the line after more communication with
K8s infrastructure leaders.

There is a sense of urgency, as Dims has asked that we relieve him of
the responsibility of hosting the external provider work in his personal
GitHub repository.

Please chime in with your opinions here so that we can work out
where the appropriate hosting for this project should be.

Thanks,
Chris Hoge
K8s-SIG-OpenStack/OpenStack-SIG-K8s Co-Lead

[1] https://github.com/dims/openstack-cloud-controller-manager
[2] https://github.com/kubernetes/community/pull/1862
[3] https://etherpad.openstack.org/p/sig-k8s-2018-dublin-ptg


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [loci] Removing deprecated project-specific Loci repositories

2018-02-27 Thread Chris Hoge
On October 17, 2017, the Loci team retired the project-specific
Loci repositories in favor of a single repository. This was done to
consolidate development and prevent the anti-pattern of one repository
with duplicated code for every OpenStack project.

After this five month deprecation period, in which we have provided no
support for those repositories, and with all development focused on the
primary Loci repository, we are officially requesting[1] that the project
specific repositories be removed from OpenStack infra hosting.

* Loci has no requirements syncing.
* The project-specific repositories have no project gating.
* We have zeroed out the project-specific repositories.

If you're interested in Loci, the primary repository and project
remains active, and we encourage your use and contributions.[2]

[1] https://review.openstack.org/#/c/548268/
[2] https://git.openstack.org/cgit/openstack/loci/



Re: [openstack-dev] [k8s][ptg] SIG-K8s Scheduling for Dublin PTG

2018-02-26 Thread Chris Hoge
Initial scheduling is live for sig-k8s work at the PTG. Tuesday morning is
going to be devoted to external provider migration and documentation.
Late morning includes a Kolla session. The afternoon is mostly free, with
a session set aside for testing. If you have topics you'd like to have
sessions on, please add them to the schedule. If you’re working on k8s
within the OpenStack community, there is a team photo scheduled
for 3:30.

https://etherpad.openstack.org/p/sig-k8s-2018-dublin-ptg

Chris

> On Feb 21, 2018, at 7:41 PM, Chris Hoge  wrote:
> 
> SIG-K8s has a planning etherpad available for the Dublin PTG. We have
> space scheduled for Tuesday, with approximately eight forty-minute work
> blocks. For the K8s on OpenStack side of things, we've identified a core
> set of priorities that we'll be working on that day, including:
> 
> * Moving openstack-cloud-controller-manager into OpenStack git repo.
> * Enabling and improving testing across multiple platforms.
> * Identifying documentation gaps.
> 
> Some of these items have some collaboration points with the Infra and
> QA teams. If members of those teams could help us identify when they
> would be available to work on repository creation and enabling testing,
> that would help us to schedule the appropriate times for those topics.
> 
> The work of the SIG-K8s groups also covers other Kubernetes and OpenStack
> integrations, including deploying OpenStack on top of Kubernetes. If
> anyone from the Kolla, OpenStack-Helm, Loci, Magnum, Kuryr, or Zun
> teams would like to schedule cross-project work sessions, please add your
> requests and preferred times to the planning etherpad. Additionally, I
> can be available to attend work sessions for any of those projects.
> 
> https://etherpad.openstack.org/p/sig-k8s-2018-dublin-ptg
> 
> Thanks!
> Chris
> 
> 


[openstack-dev] [k8s] SIG-K8s Scheduling for Dublin PTG

2018-02-21 Thread Chris Hoge
SIG-K8s has a planning etherpad available for the Dublin PTG. We have
space scheduled for Tuesday, with approximately eight forty-minute work
blocks. For the K8s on OpenStack side of things, we've identified a core
set of priorities that we'll be working on that day, including:

* Moving openstack-cloud-controller-manager into OpenStack git repo.
* Enabling and improving testing across multiple platforms.
* Identifying documentation gaps.

Some of these items have some collaboration points with the Infra and
QA teams. If members of those teams could help us identify when they
would be available to work on repository creation and enabling testing,
that would help us to schedule the appropriate times for those topics.

The work of the SIG-K8s groups also covers other Kubernetes and OpenStack
integrations, including deploying OpenStack on top of Kubernetes. If
anyone from the Kolla, OpenStack-Helm, Loci, Magnum, Kuryr, or Zun
teams would like to schedule cross-project work sessions, please add your
requests and preferred times to the planning etherpad. Additionally, I
can be available to attend work sessions for any of those projects.

https://etherpad.openstack.org/p/sig-k8s-2018-dublin-ptg

Thanks!
Chris




[openstack-dev] [k8s] openstack-sig-k8s planning for Dublin

2018-02-07 Thread Chris Hoge
sig-k8s has a block of room time put aside for the Dublin PTG. I’ve set
up a planning etherpad for work and discussion topics[1]. High priority
items include:

* openstack provider breakout [2]
* provider testing
* documentation updates

Please feel free to add relevant agenda items, links, and discussion
topics.

We will begin some pre-planning today at the k8s-sig-openstack meeting,
taking place at 00 UTC Thursday [3] (Wednesday afternoon/evening for North
America, morning for Asia/Pacific region).

Thanks,
Chris

[1] https://etherpad.openstack.org/p/sig-k8s-2018-dublin-ptg
[2] https://github.com/dims/openstack-cloud-controller-manager
[3] https://github.com/kubernetes/community/tree/master/sig-openstack#meetings




[openstack-dev] [refstack][ptl] PTL Candidacy for Rocky

2018-02-01 Thread Chris Hoge
I am submitting my self nomination to serve as the RefStack PTL for 
the Rocky development cycle. For the Rocky cycle, I will continue
to focus efforts on moving the RefStack Server and Client into
maintenance mode. Outstanding tasks include:

  * Adding functionality to upload subunit data for test results.
  * Adding Tempest autoconfiguration to the client.
  * Updating library dependencies.
  * Providing consistent API documentation.

In the previous cycle, the Tempest Autoconfig project was added to
RefStack governance. Another goal of the Rocky cycle is to transition
project leadership to the Tempest Autoconfig team, as this project is
where the majority of future work is going to happen.

Thank you,

Chris Hoge



[openstack-dev] [refstack] October 17, 2017 RefStack meeting cancelled

2017-10-16 Thread Chris Hoge
The October 17, 2017 RefStack meeting is cancelled. Our next team meeting
will be held on October 24, 2017.

-Chris



Re: [openstack-dev] [k8s][deployment][kolla-kubernetes][magnum][kuryr][zun][qa][api] Proposal for SIG-K8s

2017-09-28 Thread Chris Hoge

> On Sep 18, 2017, at 12:54 PM, Hongbin Lu  wrote:
> 
> Hi Chris,
>  
> Sorry I missed the meeting since I was not in PTG last week. After a quick 
> research on the mission of SIG-K8s, I think we (the OpenStack Zun team) have 
> an item that fits well into this SIG, which is the k8s connector feature:
>  
>   https://blueprints.launchpad.net/zun/+spec/zun-connector-for-k8s
>  
> I added it to the etherpad and hope it will be well accepted by the SIG.

Of course it is welcome and accepted. Given the length of the subject line
calling out groups, I propose shortening the sig-k8s-related tag to just
[k8s]. This will make the subject line more meaningful in
conveying the intent of the message, and every team and person that
participates in the SIG has a simple catch-all tag to search against.

My intention was never to make any individual or team feel excluded. I
apologize if my oversight was read in any other way. Going forward
this simplification should be read as implying the inclusion of all teams
and individuals working with Kubernetes in OpenStack.

Sincerely,
-Chris

>  
> Best regards,
> Hongbin
>  
> From: Chris Hoge [mailto:ch...@openstack.org] 
> Sent: September-15-17 12:25 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] 
> [k8s][deployment][kolla-kubernetes][magnum][kuryr][qa][api] Proposal for 
> SIG-K8s
>  
> Link to the etherpad for the upcoming meeting.
>  
> https://etherpad.openstack.org/p/queens-ptg-sig-k8s
>  
>  
> On Sep 14, 2017, at 10:23 AM, Chris Hoge wrote:
>  
> This Friday, September 15 at the PTG we will be hosting an organizational
> meeting for SIG-K8s. More information on the proposal, meeting time, and
> remote attendance is in the openstack-sigs mailing list [1].
> 
> Thanks,
> Chris Hoge
> Interop Engineer
> OpenStack Foundation
> 
> [1] 
> http://lists.openstack.org/pipermail/openstack-sigs/2017-September/51.html


[openstack-dev] [refstack] RefStack Meeting time change

2017-09-28 Thread Chris Hoge
At the previous RefStack meeting, the team unanimously decided to move
our weekly meeting from Tuesdays at 19:00 UTC to Tuesdays at 17:00 UTC in
#openstack-meeting-alt. [1][2]

Thanks
Chris

[1] 
http://eavesdrop.openstack.org/meetings/refstack/2017/refstack.2017-09-26-19.00.log.html#l-58
[2] https://review.openstack.org/#/c/508202


[openstack-dev] Fwd: [k8s][deployment][kolla-kubernetes][magnum][kuryr][qa][api] Proposal for SIG-K8s

2017-09-21 Thread Chris Hoge
In preparation for the Sydney Forum, I’ve created a brainstorming
etherpad for forum topics. Please contribute topics as you see fit,
and from there we can start making some submissions.

https://etherpad.openstack.org/p/sig-k8s-sydney-forum-topics
https://wiki.openstack.org/wiki/Forum/Sydney2017

PTG got me a little behind on this.

Thanks,
Chris

> Begin forwarded message:
> 
> From: Chris Hoge 
> Subject: [openstack-dev] 
> [k8s][deployment][kolla-kubernetes][magnum][kuryr][qa][api] Proposal for 
> SIG-K8s
> Date: September 14, 2017 at 9:23:22 AM PDT
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Reply-To: "OpenStack Development Mailing List \(not for usage questions\)" 
> 
> 
> This Friday, September 15 at the PTG we will be hosting an organizational
> meeting for SIG-K8s. More information on the proposal, meeting time, and
> remote attendance is in the openstack-sigs mailing list [1].
> 
> Thanks,
> Chris Hoge
> Interop Engineer
> OpenStack Foundation
> 
> [1] 
> http://lists.openstack.org/pipermail/openstack-sigs/2017-September/51.html


Re: [openstack-dev] [k8s][deployment][kolla-kubernetes][magnum][kuryr][qa][api] Proposal for SIG-K8s

2017-09-15 Thread Chris Hoge
Link to the etherpad for the upcoming meeting.

https://etherpad.openstack.org/p/queens-ptg-sig-k8s


> On Sep 14, 2017, at 10:23 AM, Chris Hoge  wrote:
> 
> This Friday, September 15 at the PTG we will be hosting an organizational
> meeting for SIG-K8s. More information on the proposal, meeting time, and
> remote attendance is in the openstack-sigs mailing list [1].
> 
> Thanks,
> Chris Hoge
> Interop Engineer
> OpenStack Foundation
> 
> [1] 
> http://lists.openstack.org/pipermail/openstack-sigs/2017-September/51.html


[openstack-dev] [k8s][deployment][kolla-kubernetes][magnum][kuryr][qa][api] Proposal for SIG-K8s

2017-09-14 Thread Chris Hoge
This Friday, September 15 at the PTG we will be hosting an organizational
meeting for SIG-K8s. More information on the proposal, meeting time, and
remote attendance is in the openstack-sigs mailing list [1].

Thanks,
Chris Hoge
Interop Engineer
OpenStack Foundation

[1] 
http://lists.openstack.org/pipermail/openstack-sigs/2017-September/51.html


Re: [openstack-dev] [ptg][interop][refstack][all][ironic][cinder]RefStack and Interop WG PTG Agenda

2017-09-06 Thread Chris Hoge

> On Sep 4, 2017, at 1:04 AM, Dmitry Tantsur  wrote:
> 
> On 09/01/2017 07:07 PM, Chris Hoge wrote:
>> The RefStack and Interop WG teams will host a small work room on Monday
>> and Tuesday at the PTG. We would like for projects interested in the
>> interop guideline expansion to participate in guiding the development of
>> future guidelines. The draft schedule focuses on Interop work for Monday,
>> and RefStack work for Tuesday.
>> The Interop WG work will have a general session for future planning and
>> guideline work. We will also have sessions targeted towards new vertical
>> programs, with a focus on NFV. We would also like to invite
>> representatives from projects that can be installed as standalone
>> services, like Cinder and Ironic, to discuss the creation of vertical
>> programs to test for standalone interoperability of their services. In
>> another session covering extension programs, we would like to invite
>> representatives from the Designate and Heat projects to attend to discuss
>> the plans for the two proposed extension programs[1][2]. Any other
>> projects interested in building an extension program are invited to
>> attend as well.
> 
> Re "Verticals: Cinder, Ironic, Swift discussion", I'm very interested to 
> come. Can we please schedule a specific slot for it? We will be participating 
> in a Kolla discussion with Cinder folks at 4pm, maybe after that?

I’m building out the schedule in the planning etherpad right now. Will
the time slot earlier in the day work for you? There's also time later
in the day. For anyone who wants to attend a session, please add
your name and note whether a different time works better for you.

https://etherpad.openstack.org/p/InteropDenver2017PTG

-Chris


> Adding tags to get more attention.
> 
>> On Tuesday the RefStack team will meet to discuss general planning, and
>> new features such as refstack-client auto configuration, and secure
>> uploading and storage of subunit test results to the RefStack server.
>> Members of adjacent communities such as NFV that are using RefStack for
>> their own interoperability programs are encouraged to attend as well.
>> If you're interested in attending any of the sessions, please add your
>> name and availability to the agenda[3]. We will also accommodate remote
>> attendance for anyone who can't make it to Denver in person. If the
>> scheduling doesn't work out for you, I will be at the PTG all week and am
>> available to drop into any other projects.
>> Chris
>> [1] https://review.openstack.org/#/c/490648/ [2]
>> https://review.openstack.org/#/c/492635/ [3]
>> https://etherpad.openstack.org/p/InteropDenver2017PTG


[openstack-dev] [ptg][interop][refstack][all]RefStack and Interop WG PTG Agenda

2017-09-01 Thread Chris Hoge
The RefStack and Interop WG teams will host a small work room on Monday
and Tuesday at the PTG. We would like for projects interested in the
interop guideline expansion to participate in guiding the development of
future guidelines. The draft schedule focuses on Interop work for Monday,
and RefStack work for Tuesday.

The Interop WG work will have a general session for future planning and
guideline work. We will also have sessions targeted towards new vertical
programs, with a focus on NFV. We would also like to invite
representatives from projects that can be installed as standalone
services, like Cinder and Ironic, to discuss the creation of vertical
programs to test for standalone interoperability of their services. In
another session covering extension programs, we would like to invite
representatives from the Designate and Heat projects to attend to discuss
the plans for the two proposed extension programs[1][2]. Any other
projects interested in building an extension program are invited to
attend as well.

On Tuesday the RefStack team will meet to discuss general planning, and
new features such as refstack-client auto configuration, and secure
uploading and storage of subunit test results to the RefStack server.
Members of adjacent communities such as NFV that are using RefStack for
their own interoperability programs are encouraged to attend as well.

If you're interested in attending any of the sessions, please add your
name and availability to the agenda[3]. We will also accommodate remote
attendance for anyone who can't make it to Denver in person. If the
scheduling doesn't work out for you, I will be at the PTG all week and am
available to drop into any other projects. 

Chris

[1] https://review.openstack.org/#/c/490648/ [2]
https://review.openstack.org/#/c/492635/ [3]
https://etherpad.openstack.org/p/InteropDenver2017PTG




Re: [openstack-dev] [all][interop][heat][designate] Proposed updates to OpenStack Powered trademarks: extensions and verticals

2017-08-10 Thread Chris Hoge

> On Aug 10, 2017, at 2:47 PM, Chris Hoge  wrote:
> 
> At the upcoming board meeting in September, the Interop Working Group
> will be proposing a new trademark program to supplement the OpenStack
> Powered mark. This update formally defines two distinct types of programs.
> 
> 1) Platforms. This captures the three existing trademarks, OpenStack
> Powered Compute, OpenStack Powered Storage, and OpenStack Powered
> Platform. A platform can be thought of as a complete collection of
> OpenStack software to give a core set of functionality. For example,
> OpenStack Powered Storage provides Swift Object Storage and Horizon

Correction to the above:
Swift and Keystone. Horizon is not required.


> Identity. Compute offers Nova, Horizon, Glance, Cinder and Neutron.

Correction to the above:
Nova, Keystone, Glance, Cinder, and Neutron. Horizon is not required.


> 
> We are generalizing the idea of platforms to be able to capture other
> verticals within the OpenStack ecosystem. For example, we are currently
> working with NFV leaders to potentially build out an OpenStack Powered
> NFV guideline that could be used in a future trademark program.
> 
> 2) Extensions. This captures projects that provide additional
> functionality to platforms, but require certain core services to be
> available.  The intent is for an OpenStack Powered cloud to be able to
> advertise interoperable capabilities that would be nice for users to
> have but aren't strictly required for general interoperability. The
> first two extensions we are focusing on are Heat Orchestration and
> Designate DNS. If a public cloud were offering the Designate API,
> they could qualify to present themselves as "OpenStack Powered Platform
> with DNS".
> 
> We are seeking advisory status from the board at the September board
> meeting, with a goal to launch the new extension programs after the
> January board meeting. The Interop Working Group would also like to
> work with the TC on encouraging more projects to adopt the Interop
> Working Group schema to define what public-facing interfaces and code
> should be present for a deployed instance of that project to qualify
> as interoperable.
> 
> If you would like to see the new extension programs, I have reviews
> up for both Heat and Designate.
> 
> Heat: https://review.openstack.org/#/c/490648/
> Designate: https://review.openstack.org/#/c/492635/
> 
> The new interop guideline schema format is also ready to be presented
> to the board:
> 
> 2.0 schema documentation:
>  
> https://git.openstack.org/cgit/openstack/interop/tree/doc/source/schema/2.0.rst
> 2.0 schema example:
>  
> https://git.openstack.org/cgit/openstack/interop/tree/doc/source/schema/next.2.0.json
> 
> The review for the 2.0 schema (merged):
>  https://review.openstack.org/#/c/430556/
> 
> If you are the PTL of a project that would like to be considered for an
> extension trademark program, please don't hesitate to reach out to me or
> any other member of the Interop Working Group.
> 
> We're pretty excited about how we're planning on extending the trademark
> program next year, and are looking forward to working with the developer
> community to help guarantee the interoperability of OpenStack clouds
> through testing and trademark compliance.
> 
> Thanks!
> 
> Chris Hoge
> Interop Engineer
> OpenStack Foundation
> 
> 


[openstack-dev] [all][interop][heat][designate] Proposed updates to OpenStack Powered trademarks: extensions and verticals

2017-08-10 Thread Chris Hoge
At the upcoming board meeting in September, the Interop Working Group
will be proposing a new trademark program to supplement the OpenStack
Powered mark. This update formally defines two distinct types of programs.

1) Platforms. This captures the three existing trademarks, OpenStack
Powered Compute, OpenStack Powered Storage, and OpenStack Powered
Platform. A platform can be thought of as a complete collection of
OpenStack software to give a core set of functionality. For example,
OpenStack Powered Storage provides Swift Object Storage and Horizon
Identity. Compute offers Nova, Horizon, Glance, Cinder and Neutron.

We are generalizing the idea of platforms to be able to capture other
verticals within the OpenStack ecosystem. For example, we are currently
working with NFV leaders to potentially build out an OpenStack Powered
NFV guideline that could be used in a future trademark program.

2) Extensions. This captures projects that provide additional
functionality to platforms, but require certain core services to be
available.  The intent is for an OpenStack Powered cloud to be able to
advertise interoperable capabilities that would be nice for users to
have but aren't strictly required for general interoperability. The
first two extensions we are focusing on are Heat Orchestration and
Designate DNS. If a public cloud were offering the Designate API,
they could qualify to present themselves as "OpenStack Powered Platform
with DNS".

We are seeking advisory status from the board at the September board
meeting, with a goal to launch the new extension programs after the
January board meeting. The Interop Working Group would also like to
work with the TC on encouraging more projects to adopt the Interop
Working Group schema to define what public-facing interfaces and code
should be present for a deployed instance of that project to qualify
as interoperable.

If you would like to see the new extension programs, I have reviews
up for both Heat and Designate.

Heat: https://review.openstack.org/#/c/490648/
Designate: https://review.openstack.org/#/c/492635/

The new interop guideline schema format is also ready to be presented
to the board:

2.0 schema documentation:
  
https://git.openstack.org/cgit/openstack/interop/tree/doc/source/schema/2.0.rst
2.0 schema example:
  
https://git.openstack.org/cgit/openstack/interop/tree/doc/source/schema/next.2.0.json

The review for the 2.0 schema (merged):
  https://review.openstack.org/#/c/430556/

If you are the PTL of a project that would like to be considered for an
extension trademark program, please don't hesitate to reach out to me or
any other member of the Interop Working Group.

We're pretty excited about how we're planning on extending the trademark
program next year, and are looking forward to working with the developer
community to help guarantee the interoperability of OpenStack clouds
through testing and trademark compliance.

Thanks!

Chris Hoge
Interop Engineer
OpenStack Foundation


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [refstack][interop-wg] Candidacy for RefStack PTL

2017-08-08 Thread Chris Hoge
Catherine, 

Thank you for the amazing work you’ve done over the last two years. As we
discussed in the RefStack meeting today, I’ve submitted my candidacy for
RefStack PTL to the election repository. I will consider myself lucky to be
even a fraction of the leader you were for this project. Thank you.

-Chris

—

I am submitting my self nomination to serve as the RefStack PTL for
the Queens development cycle. RefStack is in a period of transition,
with much of the core team being assigned to other projects, including
the long-standing PTL. While I have few contributions to the project,
I have been an active participant and advisor to the project since I
began working with the Interop Working Group (formerly DefCore).

For the Queens cycle, I will focus on rebuilding a core team of
developers, and looking to the future of RefStack as an OpenStack
project and how it fits into the larger ecosystem. With many of the
features of RefStack complete, we will investigate the possibility
of moving the project into a maintenance state under the governance of
the Interop Working Group.

Thank you,

Chris Hoge
Interop Engineer
OpenStack Foundation

> On Aug 2, 2017, at 12:15 PM, Catherine Cuong Diep  wrote:
> 
> Hi Everyone,
> 
> As I had announced in the RefStack IRC meeting a few weeks ago, I will not 
> run for RefStack PTL in the upcoming cycle. I have been PTL for the last 2 
> years and it is time to pass the torch to a new leader.
> 
> I would like to thank everyone for your support and contribution to making
> the RefStack project and interoperability testing a reality. We would not be
> where we are today without your commitment and dedication.
> 
> I will still be around to help the project and to work with the next PTL for 
> a smooth transition. 
> 
> Catherine Diep
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-21 Thread Chris Hoge

> On Jun 21, 2017, at 2:35 PM, Jeremy Stanley  wrote:
> 
> On 2017-06-21 13:52:11 -0500 (-0500), Lauren Sell wrote:
> [...]
>> To make this actionable...Github is just a mirror of our
>> repositories, but for better or worse it's the way most people in
>> the world explore software. If you look at OpenStack on Github
>> now, it’s impossible to tell which projects are official. Maybe we
>> could help by better curating the Github projects (pinning some of
>> the top projects, using the new new topics feature to put tags
>> like openstack-official or openstack-unofficial, coming up with
>> more standard descriptions or naming, etc.).
> 
> I hadn't noticed the pinned repositories option until you mentioned
> it: appears they just extended that feature to orgs back in October
> (and introduced the topics feature in January). I could see
> potentially integrating pinning and topic management into the
> current GH API script we run when creating new mirrors
> there--assuming these are accessible via their API anyway--and yes
> normalizing the descriptions to something less freeform is something
> else we'd discussed to be able to drive users back to the official
> locations for repositories (or perhaps to the project navigator).
> 
> I've already made recent attempts to clarify our use of GH in the
> org descriptions and linked the openstack org back to the project
> navigator too, since those were easy enough to do right off the bat.
> 
>> Same goes for our repos…if there’s a way we could differentiate
>> between official and unofficial projects on this page it would be
>> really useful: https://git.openstack.org/cgit/openstack/
> 
> I have an idea as to how to go about that by generating custom
> indices rather than relying on the default one cgit provides; I'll
> mull it over.
> 
>> 2) Create a simple structure within the official set of projects
>> to provide focus and a place to get started. The challenge (again
>> to our success, and lots of great work by the community) is that
>> even the official project set is too big for most people to
>> follow.
> 
> This is one of my biggest concerns as well where high-cost (in the
> sense of increasingly valuable Infra team member time) solutions are
> being tossed around to solve the "what's official?" dilemma, while
> not taking into account that the overwhelming majority of active Git
> repositories we're hosting _are_ already deliverables for official
> teams. I strongly doubt that just labelling the minority as
> unofficial will in any way lessen the overall confusion about the
> *more than one thousand* official Git repositories we're
> maintaining.

Another instance where the horse is out of the barn, but this
is one of the reasons why I don’t like it when config-management
style efforts are organized as a one-to-one mapping of repositories
to corresponding projects. It created massive sprawl
within the ecosystem, limited opportunities for code sharing,
and made refactoring a nightmare. I lost count of the number
of times we submitted n inconsistent patches to change
similar behavior across n+1 projects. Trying to build a library
helped but was never as powerful as being able to target a
single repository.
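The pinning/topic automation Jeremy mentions could, in rough outline, drive
the GitHub REST API like this. The endpoint shown (PUT on a repo's /topics
resource, with the 2017-era preview media type) is GitHub's documented
"replace all repository topics" call, but how infra's mirror script would
actually wire it in is my assumption, so this only builds the request
pieces rather than sending anything:

```python
# Sketch: construct (but do not send) a GitHub "replace all topics" call
# that could tag mirrors as openstack-official / openstack-unofficial.
def topic_request(org, repo, topics):
    """Build the URL, headers, and payload for replacing a repo's topics."""
    url = "https://api.github.com/repos/%s/%s/topics" % (org, repo)
    headers = {
        # repository topics were a preview API in 2017, gated behind
        # this media type; an auth token header would also be needed
        "Accept": "application/vnd.github.mercy-preview+json",
    }
    payload = {"names": topics}
    return url, headers, payload

url, headers, payload = topic_request(
    "openstack", "nova", ["openstack", "openstack-official"])
print(url)      # https://api.github.com/repos/openstack/nova/topics
print(payload)  # {'names': ['openstack', 'openstack-official']}
```

Any HTTP client could then issue the PUT with those pieces; the same shape
would work for iterating over every mirrored repository.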

>> While I fully admit it was an imperfect system, the three tier
>> delineation of “integrated," “incubated" and “stackforge" was
>> something folks could follow pretty easily. The tagging and
>> mapping is valuable and provides additional detail, but having the
>> three clear buckets is ideal.  I would like to see us adopt a
>> similar system, even if the names change (i.e. core infrastructure
>> services, optional services, stackforge). Happy to throw out ideas
>> if there is interest.
> [...]
> 
> Nearly none (almost certainly only a single-digit percentage anyway)
> of the Git repositories we host are themselves source code for
> persistent network services. We have lots of tools, reusable
> libraries, documentation, meta-documentation, test harnesses,
> configuration management frameworks, plugins... we probably need a
> way to reroute audiences who are not strictly interested in browsing
> source code itself so they stop looking at those Git repositories or
> else confusion is imminent regardless. As a community we do nearly
> _everything_ in Git, far beyond mere application and service
> software.
> 
> The other logical disconnect I'm seeing is that our governance is
> formed around teams, not around software. Trying to explain the
> software through the lens of governance is almost certain to confuse
> newcomers. Because we use one term (OpenStack!) for both the
> community of contributors and the software they produce, it's going
> to become very tangled in people's minds. I'm starting to strongly
> wish could use entirely different names for the community and the
> software, but that train has probably already sailed

Two points: 
1) Block That Metaphor!
2) You’ve convinced me that the existing tooling around our current
state is going to make it

Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-21 Thread Chris Hoge

> On Jun 21, 2017, at 9:20 AM, Clark Boylan  wrote:
> 
> On Wed, Jun 21, 2017, at 08:48 AM, Dmitry Tantsur wrote:
>> On 06/19/2017 05:42 PM, Chris Hoge wrote:
>>> 
>>> 
>>>> On Jun 15, 2017, at 5:57 AM, Thierry Carrez  wrote:
>>>> 
>>>> Sean Dague wrote:
>>>>> [...]
>>>>> I think those are all fine. The other term that popped into my head was
>>>>> "Friends of OpenStack" as a way to describe the openstack-hosted efforts
>>>>> that aren't official projects. It may be too informal, but I do think
>>>>> the OpenStack-Hosted vs. OpenStack might still mix up in people's head.
>>>> 
>>>> My original thinking was to call them "hosted projects" or "host
>>>> projects", but then it felt a bit incomplete. I kinda like the "Friends
>>>> of OpenStack" name, although it seems to imply some kind of vetting that
>>>> we don't actually do.
>>> 
>>> Why not bring back the name Stackforge and apply that
>>> to unofficial projects? It’s short, descriptive, and unambiguous.
>> 
>> Just keep in mind that people always looked at stackforge projects as
>> "immature 
>> experimental projects". I remember getting questions "when is
>> ironic-inspector 
>> going to become a real project" because of our stackforge prefix back
>> then, even 
>> though it was already used in production.
> 
> A few days ago I suggested a variant of Thierry's suggestion below. Get
> rid of the 'openstack' prefix entirely for hosting and use stackforge
> for everything. Then officially governed OpenStack projects are hosted
> just like any other project within infra under the stackforge (or Opium)
> name. The problem with the current "flat" namespace is that OpenStack
> means something specific and we have overloaded it for hosting. But we
> could flip that upside down and host OpenStack within a different flat
> namespace that represented "project hosting using OpenStack infra
> tooling”.

I dunno. I understand that it’s extra work to have two namespaces,
but it sends a clear message. Approved TC, UC, and Board projects
remain under openstack, and unofficial move to a name that is not
openstack (i.e. stackforge/opium/etc).

As part of a branding exercise, it creates a clear, easy to
understand, and explain division.

For names like stackforge being considered a pejorative, we can
work as a community against that. I know that when I was helping run
the puppet modules under stackforge, I was proud of the work and
understood it to mean that it was a community supported, but not
official project. I was pretty sad when stackforge went away, precisely
because of the confusion we’re experiencing with ‘big tent’ today.


> The hosting location isn't meant to convey anything beyond the project
> is hosted on a Gerrit run by infra and tests are run by Zuul.
> stackforge/ is not an (anti)endorsement (and neither is openstack/).
> 
> Unfortunately, I expect that doing this will also result in a bunch of
> confusion around "why is OpenStack being renamed", "what is happening to
> OpenStack governance", etc.
> 
>>>> An alternative would be to give "the OpenStack project infrastructure"
>>>> some kind of a brand name (say, "Opium", for OpenStack project
>>>> infrastructure ultimate madness) and then call the hosted projects
>>>> "Opium projects". Rename the Infra team to Opium team, and voilà!
>>>> -- 
>>>> Thierry Carrez (ttx)
> 
> Clark
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org 
> <mailto:openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-19 Thread Chris Hoge


> On Jun 15, 2017, at 5:57 AM, Thierry Carrez  wrote:
> 
> Sean Dague wrote:
>> [...]
>> I think those are all fine. The other term that popped into my head was
>> "Friends of OpenStack" as a way to describe the openstack-hosted efforts
>> that aren't official projects. It may be too informal, but I do think
>> the OpenStack-Hosted vs. OpenStack might still mix up in people's head.
> 
> My original thinking was to call them "hosted projects" or "host
> projects", but then it felt a bit incomplete. I kinda like the "Friends
> of OpenStack" name, although it seems to imply some kind of vetting that
> we don't actually do.

Why not bring back the name Stackforge and apply that
to unofficial projects? It’s short, descriptive, and unambiguous.

-Chris

> An alternative would be to give "the OpenStack project infrastructure"
> some kind of a brand name (say, "Opium", for OpenStack project
> infrastructure ultimate madness) and then call the hosted projects
> "Opium projects". Rename the Infra team to Opium team, and voilà!
> -- 
> Thierry Carrez (ttx)
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Affected by OSIC, Layoffs? Or want to help?

2017-04-21 Thread Chris Hoge
I would also like to donate a domestic round trip flight or an international 
one way.

-Chris

> On Apr 21, 2017, at 11:36 AM, Lauren Sell  wrote:
> 
> Hi everyone,
> 
> The Foundation wants to help any Stackers affected by recent layoffs such as 
> OSIC get to the Boston Summit. There are companies hiring and we want to 
> retain our important community members! 
> 
> If you are a contributor who was recently laid off and need help getting to 
> Boston, please contact me ASAP. We have a little bit of room left in our 
> travel support block, and want to extend rooms and free passes to those 
> affected to help if we can.  
> 
> Amy Marrich also had a great idea for any of you frequent flyers interested 
> in pitching in! Community members could offer up some of our personal 
> frequent flyer miles to sponsor flights for these Stackers. I’d love to be 
> the first...if you were laid off and need sponsorship for a flight, I’m 
> willing to sponsor a round trip domestic flight or one-way international 
> flight with my miles. Contact me. 
> 
> Anyone else want to pitch in?
> 
> Cheers,
> Lauren
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kubernetes][go] External OpenStack Cloud Provider for Kubernetes

2017-04-04 Thread Chris Hoge

> On Apr 2, 2017, at 4:29 PM, Monty Taylor  wrote:
> 
> On 03/29/2017 03:39 PM, Steve Gordon wrote:
>> - Original Message -
>>> From: "Davanum Srinivas" 
>>> To: "Chris Hoge" 
>>> Cc: "OpenStack Development Mailing List (not for usage questions)" 
>>> ,
>>> "kubernetes-sig-openstack" 
>>> Sent: Wednesday, March 29, 2017 2:28:29 PM
>>> Subject: Re: [openstack-dev] [kubernetes][go] External OpenStack Cloud 
>>> Provider for Kubernetes
>>> 
>>> Team,
>>> 
>>> Repo is ready:
>>> http://git.openstack.org/cgit/openstack/k8s-cloud-provider
>>> 
>>> I've taken the liberty of updating it with the latest changes in the
>>> kubernetes/kubernetes repo:
>>> https://review.openstack.org/#/q/project:openstack/k8s-cloud-provider is
>>> ready
>>> 
>>> So logical next step would be to add CI jobs to test in OpenStack
>>> Infra. Anyone interested?
>> 
>> One question I have around this - do we have a shared view of what the ideal 
>> matrix of tested combinations would look like? E.g. kubernetes master on 
>> openstack project's master, kubernetes master on openstack project's stable 
>> branches (where available), do we also need/want to test kubernetes stable 
>> milestones, etc.
>> 
>> At a high level my goal would be the same as Chris's "k8s gating on 
>> OpenStack in the same ways that it does on AWS and GCE." which would imply 
>> reporting results on PRs proposed to K8S master *before* they merge but not 
>> sure we all agree on what that actually means testing against in practice on 
>> the OpenStack side of the equation?
> 
> I think we want to have jobs that have the ability to test:
> 
> 1) A proposed change to k8s-openstack-provider against current master of
> OpenStack
> 2) A proposed change to k8s-openstack-provider against a stable release
> of OpenStack
> 3) A proposed change to OpenStack against current master of
> k8s-openstack-provider
> 4) A proposed change to OpenStack against stable release of
> k8s-openstack-provider
> 
> Those are all easy now that the code is in gerrit, and it's well defined
> what triggers and where it reports.
> 
> Additionally, we need to test the surface area between
> k8s-openstack-provider and k8s itself. (if we wind up needing to test
> k8s against proposed changes to OpenStack then we've likely done
> something wrong in life)
> 
> 5) A proposed change to k8s-openstack-provider against current master of k8s
> 6) A proposed change to k8s-openstack-provider against a stable release
> of k8s
> 7) A proposed change to k8s against current master of k8s-openstack-provider
> 8) A proposed change to k8s against stable release of k8s-openstack-provider
> 
> 5 and 6 are things we can do right now. 7 and 8 will have to wait for GH
> support to land in zuul (without which we can neither trigger test jobs
> on proposed changes to k8s nor can we report the results back to anyone)

7 and 8 are going to be pretty important for integrating into the K8S
release process. At the risk of having a work item thrown at me,
is there a target for when that feature will land?

It's not critical though, sorting out every other item is a pretty
cool set of initial tests.

Of note, e2e tests have some unreliability because of things like
hard sleeps[1]. It sounds like the K8S community is trying to address
these issues, but initially we should be expecting quite a few false
negatives (where negative means test failure).

[1] https://groups.google.com/forum/#!topic/kubernetes-sig-testing/a3XUvUVmxWU
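Monty's eight combinations above amount to a small cross-product: each job
tests a proposed change to one side against master or a stable release of
the other. A sketch of how that matrix enumerates (the job naming is mine,
not infra's actual Zuul job names):

```python
# Enumerate the 8 test-job combinations from the thread above:
# change-under-test paired with a target repo, crossed with target version.
from itertools import product

pairs = [
    ("k8s-cloud-provider", "openstack"),  # jobs 1-2
    ("openstack", "k8s-cloud-provider"),  # jobs 3-4
    ("k8s-cloud-provider", "k8s"),        # jobs 5-6
    ("k8s", "k8s-cloud-provider"),        # jobs 7-8
]
jobs = [
    "test-%s-against-%s-%s" % (change, target, version)
    for (change, target), version in product(pairs, ["master", "stable"])
]
for job in jobs:
    print(job)
# prints 8 names, starting with test-k8s-cloud-provider-against-openstack-master
```

Jobs 7 and 8 (changes proposed to k8s itself) are the ones blocked on
GitHub support landing in Zuul, as discussed above.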

> 
> I would recommend that we make 5 and 6 non-voting until such a time as
> we are reporting on 7 and 8 back to k8s and have a reasonable
> expectation someone will pay attention to failures - otherwise k8s will
> be able to wedge the k8s-openstack-provider gate.
> 
>>> On Sat, Mar 25, 2017 at 12:10 PM, Chris Hoge  wrote:
>>>> 
>>>> 
>>>> On Friday, March 24, 2017 at 8:46:42 AM UTC-7, Antoni Segura Puimedon
>>>> wrote:
>>>>> 
>>>>> 
>>>>> 
>>>>> On Friday, March 24, 2017 at 3:59:18 PM UTC+1, Graham Hayes wrote:
>>>>>> 
>>>>>> On 24/03/17 10:27 -0400, Davanum Srinivas wrote:
>>>>>>> Folks,
>>>>>>> 
>>>>>>> As discussed in the etherpad:
>>>>>>> https://etherpad.openstack.org/p/go-and-containers
>>>>>>> 
>>>>>>> Here's a request for a repo in OpenStack:

Re: [openstack-dev] [kubernetes][go] External OpenStack Cloud Provider for Kubernetes

2017-04-04 Thread Chris Hoge

> On Apr 2, 2017, at 4:16 PM, Monty Taylor  wrote:
> 
> On 04/02/2017 02:53 PM, Chris Hoge wrote:
>> Now that the provider has a repository in the OpenStack project
>> namespace, we need to move over the existing set of issues and pull
>> requests and create an initial work list for migrating patches and
>> fixing existing issues.
>> 
>> I've started up an etherpad where we can track that work[1]. In the longer
>> run we should migrate over to Launchpad or Storyboard. One question,
>> to help preserve continuity with the K8S community workflow: do we want
>> to investigate ways to allow for issue creation in the OpenStack
>> namespace on GitHub?
> 
> I do not think this is a thing we want to do. While I understand the
> urge, a project needs to live somewhere (in this case we've chosen
> OpenStack) and should behave as projects do in that location. When I
> work on Ansible, I do issues on github. When I deal with tox, I file
> issues on bitbucket. Back when I dealt with Jenkins I filed issues in
> their Jira. I do not think that filing an issue in the issue tracker for
> a project is too onerous of a request to make of someone.

Sounds reasonable.

I still want to think about how to communicate efficiently across
projects. This thread, for example, was cross-posted across communities,
and has now forked as a result.

I’m personally not thrilled with cross posting. My proposal would be to
consider the openstack-dev mailing list to be the source for development
related discussions, and I can feed highlights of discussions to the
sig-k8s-openstack, and relay any relevant discussions from there back
to this list.

> We have issues turned off in all of our github mirrors, so it's highly
> unlikely someone will accidentally attempt to file an issue like that.
> (it's too bad we can't similarly turn off pull requests, but oh well)
> 
> 
>> [1] https://etherpad.openstack.org/p/k8s-provider-issue-migration
>> 
>> On Friday, March 24, 2017 at 7:27:09 AM UTC-7, Davanum Srinivas wrote:
>> 
>>Folks,
>> 
>>As discussed in the etherpad:
>>https://etherpad.openstack.org/p/go-and-containers
>><https://etherpad.openstack.org/p/go-and-containers>
>> 
>>Here's a request for a repo in OpenStack:
>>https://review.openstack.org/#/c/449641/
>><https://review.openstack.org/#/c/449641/>
>> 
>>This request pulls in the existing code from kubernetes/kubernetes
>>repo and preserves the git history too
>>https://github.com/dims/k8s-cloud-provider
>><https://github.com/dims/k8s-cloud-provider>
>> 
>>Anyone interested? please ping me on Slack or IRC and we can
>>continue this work.
>> 
>>Thanks,
>>Dims
>> 
>>-- 
>>Davanum Srinivas :: https://twitter.com/dims
>> 
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kubernetes][go] External OpenStack Cloud Provider for Kubernetes

2017-04-02 Thread Chris Hoge
Now that the provider has a repository in the OpenStack project
namespace, we need to move over the existing set of issues and pull
requests and create an initial work list for migrating patches and
fixing existing issues.

I've started up an etherpad where we can track that work[1]. In the longer
run we should migrate over to Launchpad or Storyboard. One question,
to help preserve continuity with the K8S community workflow: do we want
to investigate ways to allow for issue creation in the OpenStack
namespace on GitHub?

-Chris

[1] https://etherpad.openstack.org/p/k8s-provider-issue-migration

On Friday, March 24, 2017 at 7:27:09 AM UTC-7, Davanum Srinivas wrote:
>
> Folks, 
>
> As discussed in the etherpad: 
> https://etherpad.openstack.org/p/go-and-containers 
>
> Here's a request for a repo in OpenStack: 
> https://review.openstack.org/#/c/449641/ 
>
> This request pulls in the existing code from kubernetes/kubernetes 
> repo and preserves the git history too 
> https://github.com/dims/k8s-cloud-provider 
>
> Anyone interested? please ping me on Slack or IRC and we can continue this 
> work. 
>
> Thanks, 
> Dims 
>
> -- 
> Davanum Srinivas :: https://twitter.com/dims 
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kubernetes][go] External OpenStack Cloud Provider for Kubernetes

2017-03-25 Thread Chris Hoge


On Friday, March 24, 2017 at 8:46:42 AM UTC-7, Antoni Segura Puimedon wrote:
>
>
>
> On Friday, March 24, 2017 at 3:59:18 PM UTC+1, Graham Hayes wrote:
>>
>> On 24/03/17 10:27 -0400, Davanum Srinivas wrote: 
>> >Folks, 
>> > 
>> >As discussed in the etherpad: 
>> >https://etherpad.openstack.org/p/go-and-containers 
>> > 
>> >Here's a request for a repo in OpenStack: 
>> >https://review.openstack.org/#/c/449641/ 
>> > 
>> >This request pulls in the existing code from kubernetes/kubernetes 
>> >repo and preserves the git history too 
>> >https://github.com/dims/k8s-cloud-provider 
>> > 
>> >Anyone interested? please ping me on Slack or IRC and we can continue 
>> this work. 
>>
>> Yeah - I would love to continue the provider work on gerrit :) 
>>
>> Is there a way for us to make sure changes in the k8 master don't 
>> break our plugin? Or do we need to periodic jobs on the provider repo 
>> to catch breakages in the plugin interface? 
>>
>
> I suppose the options are either:
>
> - ask k8s to add select external cloud providers in the CI
> - have a webhook in the k8s repo that triggers CI on the OSt infra 
>

Yes please to these. My preference is for the provider to remain upstream 
in k8s, but its development has stalled out a bit. I want the best 
provider possible, but also want to make sure it's tested and visible to 
the k8s community that want to run on OpenStack. I've mentioned before that 
one of my goals is to have k8s gating on OpenStack in the same ways that it 
does on AWS and GCE.

-Chris

 

>
>> Thanks, Graham 
>>


Re: [openstack-dev] [deployment][kolla][openstack-helm]k8s-sig-openstack meeting

2017-03-21 Thread Chris Hoge
We will not have a meeting next week, as it conflicts with KubeCon
Europe. The next sig-openstack meeting for OpenStack on K8S is
tentatively scheduled for April 11.

In the meantime, if you are attending KubeCon and want to join
sig-openstack for an informal gathering or want to catch up with
others, please add your name to this organizational spreadsheet.

https://docs.google.com/spreadsheets/d/1mSfaE8DGYG4Ji-wa9rCSpEkPWbeyCOIgrGClOpyaAI8/edit#gid=0

Thanks,
Chris

> On Mar 13, 2017, at 10:43 AM, Chris Hoge  wrote:
> 
> Of course, the US daylight savings bug bit us. Please consider 18:30 UTC
> the official time. We will work out scheduling at the meeting.
> 
>> On Mar 13, 2017, at 10:36 AM, Chris Hoge  wrote:
>> 
>> Tomorrow, March 14 at 18:30 UTC/10:30 PT, we will be holding the new
>> extension to the k8s-sig-openstack meeting. This bi-weekly meeting
>> will focus on Kubernetes as a deployment platform for OpenStack.
>> This includes using Helm for orchestration.
>> 
>> For this meeting, use the zoom id: https://zoom.us/j/3843257457
>> And the etherpad: https://etherpad.openstack.org/p/openstack-helm
>> 
>> On the agenda will be the formalization of this meeting.
>> 
>> Thanks,
>> Chris
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment][kolla][openstack-helm]k8s-sig-openstack meeting

2017-03-13 Thread Chris Hoge
Of course, the US daylight savings bug bit us. Please consider 18:30 UTC
the official time. We will work out scheduling at the meeting.

> On Mar 13, 2017, at 10:36 AM, Chris Hoge  wrote:
> 
> Tomorrow, March 14 at 18:30 UTC/10:30 PT, we will be holding the new
> extension to the k8s-sig-openstack meeting. This bi-weekly meeting
> will focus on Kubernetes as a deployment platform for OpenStack.
> This includes using Helm for orchestration.
> 
> For this meeting, use the zoom id: https://zoom.us/j/3843257457
> And the etherpad: https://etherpad.openstack.org/p/openstack-helm
> 
> On the agenda will be the formalization of this meeting.
> 
> Thanks,
> Chris
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [deployment][kolla][openstack-helm]k8s-sig-openstack meeting

2017-03-13 Thread Chris Hoge
Tomorrow, March 14 at 18:30 UTC/10:30 PT, we will be holding the new
extension to the k8s-sig-openstack meeting. This bi-weekly meeting
will focus on Kubernetes as a deployment platform for OpenStack.
This includes using Helm for orchestration.

For this meeting, use the zoom id: https://zoom.us/j/3843257457
And the etherpad: https://etherpad.openstack.org/p/openstack-helm

On the agenda will be the formalization of this meeting.

Thanks,
Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment][kolla][openstack-helm] OpenStack on Helm Workgroup Meeting

2017-02-28 Thread Chris Hoge
Correction, the time is February 28 at 18:30 UTC/10:30 PT, zoom
room https://deis.zoom.us/j/668276583

My sincere apologies for the error.

> On Feb 27, 2017, at 11:45 AM, Chris Hoge  wrote:
> 
> We will be holding an OpenStack on Helm Workgroup meeting on Tuesday,
> February 28 at 19:30 UTC/11:30 PT. On the agenda will be the transition
> of the meeting from an informal working group to the
> kubernetes-sig-openstack efforts, with a proposal to seed the applications
> side of the sig-openstack meetings with this collaboration.
> 
> Meeting room:
> https://deis.zoom.us/zoomconference?m=45b8ceYiN82MrsspfNm4Go91HnZ5jVUj
> 
> Agenda:
> https://etherpad.openstack.org/p/openstack-helm
> 
> Thanks,
> Chris
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment][TripleO][kolla][ansible][fuel] Next steps for cross project collaboration

2017-02-27 Thread Chris Hoge
> 
> On Feb 27, 2017, at 8:02 AM, Steven Hardy  wrote:
> 
> Hi all,
> 
> Over the recent PTG, and previously at the design summit in Barcelona,
> we've had some productive cross-project discussions amongst the various
> deployment teams.
> 
> It's clear that we share many common problems, such as patterns for major
> version upgrades (even if the workflow isn't identical we've all duplicated
> effort e.g around basic nova upgrade workflow recently), container images
> and other common building blocks for configuration management.
> 
> Here's a non-exhaustive list of sessions where we had some good
> cross-project discussion, and agreed a number of common problems where
> collaboration may be possible:
> 
> https://etherpad.openstack.org/p/ansible-config-mgt
> 
> https://etherpad.openstack.org/p/tripleo-kolla-kubernetes
> 
> https://etherpad.openstack.org/p/kolla-pike-ptg-images
> 
> https://etherpad.openstack.org/p/fuel-ocata-fuel-tripleo-integration
> 
> If there is interest in continuing the discussions on a more regular basis,
> I'd like to propose we start a cross-project working group:
> 
> https://wiki.openstack.org/wiki/Category:Working_Groups
> 
> If I go ahead and do this, is "deployment" a sufficiently project-neutral
> term to proceed with?
> 
> I'd suggest we start with an informal WG, which it seems just requires an
> update to the wiki, e.g no need for any formal project team at this point?
> 
> Likewise I know some folks have expressed an interest in an IRC channel
> (openstack-deployment?), I'm happy to start with the ML but open to IRC
> also if someone is willing to set up the channel.
> 
> Perhaps we can start by using the tag "deployment" in all cross-project ML
> traffic, then potentially discuss IRC (or even regular meetings) if it
> becomes apparent these would add value beyond ML discussion?
> 
> Please follow up here if anyone has other/better ideas on how to facilitate
> ongoing cross-team discussion and I'll do my best to help move things
> forward.

This is great. Of note, for the projects using K8S as a deployment
platform, we have efforts to refine the kubernetes-sig-openstack
to have a common location for collaboration with K8s projects. I
just sent an invitation to the dev mailing list, but planning starts
tomorrow (Feb 28) as part of the informal OpenStack on Helm
workgroup meeting.

> Thanks!
> 
> Steve
> 




[openstack-dev] [deployment][kolla][openstack-helm] OpenStack on Helm Workgroup Meeting

2017-02-27 Thread Chris Hoge
We will be holding an OpenStack on Helm Workgroup meeting on Tuesday,
February 28 at 19:30 UTC/11:30 PT. On the agenda will be the transition
of the meeting from an informal working group to the
kubernetes-sig-openstack efforts, with a proposal to seed the applications
side of the sig-openstack meetings with this collaboration.

Meeting room:
https://deis.zoom.us/zoomconference?m=45b8ceYiN82MrsspfNm4Go91HnZ5jVUj

Agenda:
https://etherpad.openstack.org/p/openstack-helm

Thanks,
Chris



Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-22 Thread Chris Hoge

> On Jun 22, 2016, at 11:24 AM, Sean Dague  wrote:
> 
> On 06/22/2016 01:59 PM, Chris Hoge wrote:
>> 
>>> On Jun 20, 2016, at 5:10 AM, Sean Dague wrote:
>>> 
>>> On 06/14/2016 07:19 PM, Chris Hoge wrote:
>>>> 
>>>>> On Jun 14, 2016, at 3:59 PM, Edward Leafe wrote:
>>>>> 
>>>>> On Jun 14, 2016, at 5:50 PM, Matthew Treinish wrote:
>>>>> 
>>>>>> But, if we add another possible state on the defcore side like
>>>>>> conditional pass,
>>>>>> warning, yellow, etc. (the name doesn't matter) which is used to
>>>>>> indicate that
>>>>>> things on product X could only pass when strict validation was
>>>>>> disabled (and
>>>>>> be clear about where and why) then my concerns would be alleviated.
>>>>>> I just do
>>>>>> not want this to end up not being visible to end users trying to
>>>>>> evaluate
>>>>>> interoperability of different clouds using the test results.
>>>>> 
>>>>> +1
>>>>> 
>>>>> Don't fail them, but don't cover up their incompatibility, either.
>>>>> -- Ed Leafe
>>>> 
>>>> That’s not my proposal. My requirement is that vendors who want to do
>>>> this
>>>> state exactly which APIs are sending back additional data, and that this
>>>> information be published.
>>>> 
>>>> There are different levels of incompatibility. A response with
>>>> additional data
>>>> that can be safely ignored is different from a changed response that
>>>> would
>>>> cause a client to fail.
>>> 
>>> It's actually not different. It's really not.
>>> 
>>> This idea that it's safe to add response data is based on an assumption
>>> that software versions only move forward. If you have a single deploy of
>>> software, that's fine.
>>> 
>>> However as noted, we've got production clouds on Juno <-> Mitaka in the
>>> wild. Which means if we want to support horizontal transfer between
>>> clouds, the user experienced timeline might be start on a Mitaka cloud,
>>> then try to move to Juno. So anything added from Juno -> Mitaka without
>>> signaling has exactly the same client breaking behavior as removing
>>> attributes.
>>> 
>>> Which is why microversions are needed for attribute adds.
>> 
>> I’d like to note that Nova v2.0 is still a supported API, which
>> as far as I understand allows for additional attributes and
>> extensions. That Tempest doesn’t allow for disabling strict
>> checking when using a v2.0 endpoint is a problem.
>> 
>> The reporting of v2.0 in the Marketplace (which is what we do
>> right now) is also a signal to a user that there may be vendor
>> additions to the API.
>> 
>> DefCore doesn’t disallow the use of a 2.0 endpoint as part
>> of the interoperability standard.
> 
> This is a point of confusion.
> 
> The API definition did not allow that. The implementation of the API
> stack did.

And downstream vendors took advantage of that. We may
not like it, but it’s a reality in the current ecosystem.

> In Liberty the v2.0 API is optionally provided by a different backend
> stack that doesn't support extensions.
> In Mitaka it is default v2.0 API on a non extensions backend
> In Newton the old backend is deleted.
> 
> From Newton forward there is still a v2.0 API, but all the code hooks
> that provided facilities for extensions are gone.

It’s really important that the current documentation reflect the
code and intent of the dev team. As of writing this e-mail, 

"• v2 (SUPPORTED) and v2 extensions (SUPPORTED) (Will
be deprecated in the near future.)"[1]

Even with this being removed in Newton, DefCore still has
to allow for it in every supported version.

-Chris

[1] http://docs.openstack.org/developer/nova/

>   -Sean
> 
> -- 
> Sean Dague
> http://dague.net
> 




Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-22 Thread Chris Hoge

> On Jun 20, 2016, at 5:10 AM, Sean Dague  wrote:
> 
> On 06/14/2016 07:19 PM, Chris Hoge wrote:
>> 
>>> On Jun 14, 2016, at 3:59 PM, Edward Leafe  wrote:
>>> 
>>> On Jun 14, 2016, at 5:50 PM, Matthew Treinish  wrote:
>>> 
>>>> But, if we add another possible state on the defcore side like conditional 
>>>> pass,
>>>> warning, yellow, etc. (the name doesn't matter) which is used to indicate 
>>>> that
>>>> things on product X could only pass when strict validation was disabled 
>>>> (and
>>>> be clear about where and why) then my concerns would be alleviated. I just 
>>>> do
>>>> not want this to end up not being visible to end users trying to evaluate
>>>> interoperability of different clouds using the test results.
>>> 
>>> +1
>>> 
>>> Don't fail them, but don't cover up their incompatibility, either.
>>> -- Ed Leafe
>> 
>> That’s not my proposal. My requirement is that vendors who want to do this
>> state exactly which APIs are sending back additional data, and that this
>> information be published.
>> 
>> There are different levels of incompatibility. A response with additional 
>> data
>> that can be safely ignored is different from a changed response that would
>> cause a client to fail.
> 
> It's actually not different. It's really not.
> 
> This idea that it's safe to add response data is based on an assumption
> that software versions only move forward. If you have a single deploy of
> software, that's fine.
> 
> However as noted, we've got production clouds on Juno <-> Mitaka in the
> wild. Which means if we want to support horizontal transfer between
> clouds, the user experienced timeline might be start on a Mitaka cloud,
> then try to move to Juno. So anything added from Juno -> Mitaka without
> signaling has exactly the same client breaking behavior as removing
> attributes.
> 
> Which is why microversions are needed for attribute adds.

I’d like to note that Nova v2.0 is still a supported API, which
as far as I understand allows for additional attributes and
extensions. That Tempest doesn’t allow for disabling strict
checking when using a v2.0 endpoint is a problem.

The reporting of v2.0 in the Marketplace (which is what we do
right now) is also a signal to a user that there may be vendor
additions to the API.

DefCore doesn’t disallow the use of a 2.0 endpoint as part
of the interoperability standard.

-Chris


>   -Sean
> 
> -- 
> Sean Dague
> http://dague.net
> 


Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-21 Thread Chris Hoge

> On Jun 20, 2016, at 6:56 AM, Doug Hellmann  wrote:
> 

> 
> I'm also, I think, edging away from the "we need to find a compromise"
> camp into the "why is this turning into such a big deal" camp. How did
> we get into a situation where the community has set a clear direction,
> the trademark certification system has a long lead time built in, and
> vendors are still not able to maintain certification?

If I had to make an educated guess, it’s because product managers
have to produce a roadmap with goals and features that consider
both what’s happening upstream, what is currently deployed, and
existing customers.

Just pulling attributes that were once ‘ok’ within the ecosystem and
now aren’t (even with lead time) isn’t as easy as “just change the
response”. It takes time, and given the year-long cycle that DefCore
has adopted for re-testing and the relative youth of the OpenStack
Powered Trademark program, it’s not surprising that a few
clouds have been hit by this.

I’ve spoken with all three vendors who are being hit by this, and
two definitely have a longer lead time to work on this. The third
is using the extra responses internally and is currently working
on changing their custom tools to get the extra information in
a different way.

I’ve also spoken with another vendor who is going to be caught
by it, though, and have explained the situation to them and
they are considering their options.

I am now actively telling vendors who are still sending additional
properties to work with the Nova team upstream to have
their additional properties micro-versioned.
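To make that concrete, here is a minimal sketch of how a client pins a Nova microversion with the `X-OpenStack-Nova-API-Version` header. The helper name and token are placeholders for illustration, not part of any official SDK; the point is that a pinned version fixes the response schema, so attributes added in later microversions never show up unannounced.

```python
def microversion_headers(token, version):
    """Build Nova request headers pinned to a specific microversion.

    With the version pinned, the response schema is fixed: attributes
    added in later microversions are not returned, which is the
    signaling mechanism that strict response checking relies on.
    """
    return {
        "X-Auth-Token": token,
        "X-OpenStack-Nova-API-Version": version,
    }

# A client would pass these headers on every call, e.g.:
#   requests.get(f"{nova_endpoint}/servers/detail",
#                headers=microversion_headers(token, "2.3"))
headers = microversion_headers("placeholder-token", "2.3")
print(headers["X-OpenStack-Nova-API-Version"])
```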

> Are the systems
> running modified versions of newer OpenStack that can't be certified
> with older versions of Tempest?

With the volume of bug fixes, refactoring, and general improvements
from the last year, I’d say no.

-Chris

> 
> Doug



Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-14 Thread Chris Hoge
Top posting one note and direct comments inline, I’m proposing
this as a member of the DefCore working group, but this
proposal itself has not been accepted as the forward course of
action by the working group. These are my own views as the
administrator of the program and not that of the working group
itself, which may independently reject the idea outside of the
response from the upstream devs.

I posted a link to this thread to the DefCore mailing list to make
that working group aware of the outstanding issues.

> On Jun 14, 2016, at 3:50 PM, Matthew Treinish  wrote:
> 
> On Tue, Jun 14, 2016 at 05:42:16PM -0400, Doug Hellmann wrote:
>> Excerpts from Matthew Treinish's message of 2016-06-14 15:12:45 -0400:
>>> On Tue, Jun 14, 2016 at 02:41:10PM -0400, Doug Hellmann wrote:
>>>> Excerpts from Matthew Treinish's message of 2016-06-14 14:21:27 -0400:
>>>>> On Tue, Jun 14, 2016 at 10:57:05AM -0700, Chris Hoge wrote:
>>>>>> Last year, in response to Nova micro-versioning and extension updates[1],
>>>>>> the QA team added strict API schema checking to Tempest to ensure that
>>>>>> no additional properties were added to Nova API responses[2][3]. In the
>>>>>> last year, at least three vendors participating in the OpenStack Powered
>>>>>> Trademark program have been impacted by this change, two of which
>>>>>> reported this to the DefCore Working Group mailing list earlier this 
>>>>>> year[4].
>>>>>> 
>>>>>> The DefCore Working Group determines guidelines for the OpenStack Powered
>>>>>> program, which includes capabilities with associated functional tests
>>>>>> from Tempest that must be passed, and designated sections with associated
>>>>>> upstream code [5][6]. In determining these guidelines, the working group
>>>>>> attempts to balance the future direction of development with lagging
>>>>>> indicators of deployments and user adoption.
>>>>>> 
>>>>>> After a tremendous amount of consideration, I believe that the DefCore
>>>>>> Working Group needs to implement a temporary waiver for the strict API
>>>>>> checking requirements that were introduced last year, to give downstream
>>>>>> deployers more time to catch up with the strict micro-versioning
>>>>>> requirements determined by the Nova/Compute team and enforced by the
>>>>>> Tempest/QA team.
>>>>> 
>>>>> I'm very much opposed to this being done. If we're actually concerned with
>>>>> interoperability and verify that things behave in the same manner between 
>>>>> multiple
>>>>> clouds then doing this would be a big step backwards. The fundamental 
>>>>> disconnect
>>>>> here is that the vendors who have implemented out of band extensions or 
>>>>> were
>>>>> taking advantage of previously available places to inject extra attributes
>>>>> believe that doing so means they're interoperable, which is quite far from
>>>>> reality. **The API is not a place for vendor differentiation.**
>>>> 
>>>> This is a temporary measure to address the fact that a large number
>>>> of existing tests changed their behavior, rather than having new
>>>> tests added to enforce this new requirement. The result is deployments
>>>> that previously passed these tests may no longer pass, and in fact
>>>> we have several cases where that's true with deployers who are
>>>> trying to maintain their own standard of backwards-compatibility
>>>> for their end users.
>>> 
>>> That's not what happened though. The API hasn't changed and the tests 
>>> haven't
>>> really changed either. We made our enforcement on Nova's APIs a bit 
>>> stricter to
>>> ensure nothing unexpected appeared. For the most these tests work on any 
>>> version
>>> of OpenStack. (we only test it in the gate on supported stable releases, 
>>> but I
>>> don't expect things to have drastically shifted on older releases) It also
>>> doesn't matter which version of the API you run, v2.0 or v2.1. Literally, 
>>> the
>>> only case it ever fails is when you run something extra, not from the 
>>> community,
>>> either as an extension (which themselves are going away [1]) or another 
>>> service
>>> that wraps nova or imitates nova. I&

Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-14 Thread Chris Hoge

> On Jun 14, 2016, at 3:59 PM, Edward Leafe  wrote:
> 
> On Jun 14, 2016, at 5:50 PM, Matthew Treinish  wrote:
> 
>> But, if we add another possible state on the defcore side like conditional 
>> pass,
>> warning, yellow, etc. (the name doesn't matter) which is used to indicate 
>> that
>> things on product X could only pass when strict validation was disabled (and
>> be clear about where and why) then my concerns would be alleviated. I just do
>> not want this to end up not being visible to end users trying to evaluate
>> interoperability of different clouds using the test results.
> 
> +1
> 
> Don't fail them, but don't cover up their incompatibility, either.
> -- Ed Leafe

That’s not my proposal. My requirement is that vendors who want to do this
state exactly which APIs are sending back additional data, and that this
information be published.

There are different levels of incompatibility. A response with additional data
that can be safely ignored is different from a changed response that would
cause a client to fail.

One would hope that micro-versions would be able to address this exact
issue for vendors by giving them a means to propose optional but 
well-defined API response additions (not extensions) that are defined
upstream and usable by all vendors. If it’s not too off topic, can someone
from the Nova team explain how something like that would work (if it
would at all)?

-Chris


Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-14 Thread Chris Hoge

> On Jun 14, 2016, at 11:21 AM, Matthew Treinish  wrote:
> 
> On Tue, Jun 14, 2016 at 10:57:05AM -0700, Chris Hoge wrote:
>> Last year, in response to Nova micro-versioning and extension updates[1],
>> the QA team added strict API schema checking to Tempest to ensure that
>> no additional properties were added to Nova API responses[2][3]. In the
>> last year, at least three vendors participating in the OpenStack Powered
>> Trademark program have been impacted by this change, two of which
>> reported this to the DefCore Working Group mailing list earlier this year[4].
>> 
>> The DefCore Working Group determines guidelines for the OpenStack Powered
>> program, which includes capabilities with associated functional tests
>> from Tempest that must be passed, and designated sections with associated
>> upstream code [5][6]. In determining these guidelines, the working group
>> attempts to balance the future direction of development with lagging
>> indicators of deployments and user adoption.
>> 
>> After a tremendous amount of consideration, I believe that the DefCore
>> Working Group needs to implement a temporary waiver for the strict API
>> checking requirements that were introduced last year, to give downstream
>> deployers more time to catch up with the strict micro-versioning
>> requirements determined by the Nova/Compute team and enforced by the
>> Tempest/QA team.
> 
> I'm very much opposed to this being done. If we're actually concerned with
> interoperability and verify that things behave in the same manner between 
> multiple
> clouds then doing this would be a big step backwards. The fundamental 
> disconnect
> here is that the vendors who have implemented out of band extensions or were
> taking advantage of previously available places to inject extra attributes
> believe that doing so means they're interoperable, which is quite far from
> reality. **The API is not a place for vendor differentiation.**

Yes, it’s bad practice, but it’s also a reality, and I honestly believe that
vendors have received the message and are working on changing.

> As a user of several clouds myself I can say that having random gorp in a
> response makes it much more difficult to use my code against multiple clouds. 
> I
> have to determine which properties being returned are specific to that 
> vendor's
> cloud and if I actually need to depend on them for anything it makes whatever
> code I'm writing incompatible for using against any other cloud. (unless I
> special case that block for each cloud) Sean Dague wrote a good post where a 
> lot
> of this was covered a year ago when microversions was starting to pick up 
> steam:
> 
> https://dague.net/2015/06/05/the-nova-api-in-kilo-and-beyond-2
> 
> I'd recommend giving it a read, he explains the user first perspective more
> clearly there.
> 
> I believe Tempest in this case is doing the right thing from an 
> interoperability
> perspective and ensuring that the API is actually the API. Not an API with 
> extra
> bits a vendor decided to add.

A few points on this, though. Right now, Nova is the only API where
this is enforced, along with its clients. While this may change in the
future, I don’t think it accurately represents the reality of what’s
happening in the ecosystem.

As mentioned before, we also need to balance the lagging nature of
DefCore as an interoperability guideline with the needs of testing
upstream changes. I’m not asking for a permanent change that
undermines the goals of Tempest for QA, rather a temporary
upstream modification that recognizes the challenges faced by
vendors in the market right now, and gives them room to continue
to align themselves with upstream. Without this, the two other 
alternatives are to:

* Have some vendors leave the Powered program unnecessarily,
  weakening it.
* Force DefCore to adopt non-upstream testing, either as a fork
  or an independent test suite.

Neither seem ideal to me.

One of my goals is to transparently strengthen the ties between
upstream and downstream development. There is a deadline
built into this proposal, and my intention is to enforce it.

> I don't think a cloud or product that does this
> to the api should be considered an interoperable OpenStack cloud and failing 
> the
> tests is the correct behavior.

I think it’s more nuanced than this, especially right now.
Only additions to responses will be considered, not changes.
These additions will be clearly labelled as variations,
signaling the differences to users. Existing clients in use
will not break. Correct behavior will eventually be enforced,
and this would be clearly signaled by both the test tool and
thr

[openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-14 Thread Chris Hoge
Last year, in response to Nova micro-versioning and extension updates[1],
the QA team added strict API schema checking to Tempest to ensure that
no additional properties were added to Nova API responses[2][3]. In the
last year, at least three vendors participating in the OpenStack Powered
Trademark program have been impacted by this change, two of which
reported this to the DefCore Working Group mailing list earlier this year[4].

The DefCore Working Group determines guidelines for the OpenStack Powered
program, which includes capabilities with associated functional tests
from Tempest that must be passed, and designated sections with associated
upstream code [5][6]. In determining these guidelines, the working group
attempts to balance the future direction of development with lagging
indicators of deployments and user adoption.

After a tremendous amount of consideration, I believe that the DefCore
Working Group needs to implement a temporary waiver for the strict API
checking requirements that were introduced last year, to give downstream
deployers more time to catch up with the strict micro-versioning
requirements determined by the Nova/Compute team and enforced by the
Tempest/QA team.

My reasoning behind this is that while the change that enabled strict
checking was discussed publicly in the developer community and took
some time to be implemented, it still landed quickly and broke several
existing deployments overnight. As Tempest has moved forward with
bug and UX fixes (some in part to support the interoperability testing
efforts of the DefCore Working Group), using an older version of Tempest
where this strict checking is not enforced is no longer a viable solution
for downstream deployers. The TC has passed a resolution to advise
DefCore to use Tempest as the single source of capability testing[7],
but this naturally introduces tension between the competing goals of
maintaining upstream functional testing and also tracking lagging
indicators.

My proposal for addressing this problem approaches it at two levels:

* For the short term, I will submit a blueprint and patch to tempest that
  allows configuration of a grey-list of Nova APIs where strict response
  checking on additional properties will be disabled. So, for example,
  if the 'create servers' API call returned extra properties on that call,
  the strict checking on this line[8] would be disabled at runtime.
  Use of this code path will emit a deprecation warning, and the
  code will be scheduled for removal in 2017 directly after the release
  of the 2017.01 guideline. Vendors would be required to submit the
  grey-list of APIs with additional response data that would be
  published to their marketplace entry.

* Longer term, vendors will be expected to work with upstream to update
  the API for returning additional data that is compatible with
  API micro-versioning as defined by the Nova team, and the waiver would
  no longer be allowed after the release of the 2017.01 guideline.
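A minimal sketch of the short-term grey-list idea follows. The names here (GREY_LIST, relax) are invented for illustration and are not actual Tempest code; the intent is only to show that strict property checking would be disabled solely for the API calls a vendor declares, while every other API stays strictly validated.

```python
import copy

# Vendor-declared grey-list of API calls whose responses may carry
# extra properties; under the proposal this list would be published
# alongside the vendor's marketplace entry.
GREY_LIST = {"create server"}

def relax(schema, api_name, grey_list=GREY_LIST):
    """Return a copy of the response schema with strict property
    checking disabled, but only for grey-listed API calls."""
    if api_name not in grey_list:
        return schema
    relaxed = copy.deepcopy(schema)
    # Allowing unknown properties restores the pre-strict-checking behavior.
    relaxed["additionalProperties"] = True
    return relaxed

strict = {
    "type": "object",
    "properties": {"id": {"type": "string"}},
    "additionalProperties": False,
}
print(relax(strict, "create server")["additionalProperties"])  # True
print(relax(strict, "delete server")["additionalProperties"])  # False
```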

For the next half-year, I feel that this approach strengthens interoperability
by accurately capturing the current state of OpenStack deployments and
client tools. Before this change, additional properties on responses
weren't explicitly disallowed, and vendors and deployers took advantage
of this in production. While this is behavior that the Nova and QA teams
want to stop, it will take a bit more time to reach downstream. Also, as
of right now, as far as I know the only client that does strict response
checking for Nova responses is the Tempest client. Currently, additional
properties in responses are ignored and do not break existing client
functionality. There is currently little to no harm done to downstream
users by temporarily allowing additional data to be returned in responses.
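For readers unfamiliar with what strict checking means mechanically, the following toy validator (not Tempest code; the function name and schema are invented for illustration) shows how a schema that disallows additional properties rejects a response carrying a vendor-specific extra key:

```python
def check_strict(response, schema):
    """Reject any response key not named in the schema, mirroring
    additionalProperties=False enforcement."""
    extra = set(response) - set(schema["properties"])
    if extra and not schema.get("additionalProperties", True):
        raise ValueError(f"unexpected properties: {sorted(extra)}")

schema = {"properties": {"id": {}, "name": {}}, "additionalProperties": False}
check_strict({"id": "abc", "name": "vm1"}, schema)  # documented keys: passes
try:
    check_strict({"id": "abc", "name": "vm1", "vendor:foo": 1}, schema)
except ValueError as err:
    print(err)  # unexpected properties: ['vendor:foo']
```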

Thanks,

Chris Hoge
Interop Engineer
OpenStack Foundation

[1] 
https://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/api-microversions.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2015-February/057613.html
[3] https://review.openstack.org/#/c/156130
[4] 
http://lists.openstack.org/pipermail/defcore-committee/2016-January/000986.html
[5] http://git.openstack.org/cgit/openstack/defcore/tree/2015.07.json
[6] http://git.openstack.org/cgit/openstack/defcore/tree/2016.01.json
[7] 
http://git.openstack.org/cgit/openstack/governance/tree/resolutions/20160504-defcore-test-location.rst
[8] 
http://git.openstack.org/cgit/openstack/tempest-lib/tree/tempest_lib/api_schema/response/compute/v2_1/servers.py#n39




Re: [openstack-dev] [glance] [defcore] [interop] Proposal for a virtual sync dedicated to Import Refactor May 26th

2016-05-24 Thread Chris Hoge
+1

> On May 23, 2016, at 8:25 PM, Mike Perez  wrote:
> 
>> On 18:00 May 20, Nikhil Komawar wrote:
>> Hello all,
>> 
>> 
>> I want to propose having a dedicated virtual sync next week Thursday May
>> 26th at 1500UTC for one hour on the Import Refactor work [1] ongoing in
>> Glance. We are making a few updates to the spec; so it would be good to
>> have everyone on the same page and soon start merging those spec changes.
>> 
>> 
>> Also, I would like for this sync to be cross project one so that all the
>> different stakeholders are aware of the updates to this work even if you
>> just want to listen in.
>> 
>> 
>> Please vote with +1, 0, -1. Also, if the time doesn't work please
>> propose 2-3 additional time slots.
>> 
>> 
>> We can decide later on the tool and I will setup agenda if we have
>> enough interest.
> 
> +1
> 
> -- 
> Mike Perez
> 



[openstack-dev] [tc] Request for comment on requiring running Linux as DefCore capability

2015-12-02 Thread Chris Hoge
A recent change request for the DefCore guidelines to “Flag validation
tests as being OS specific[1]" has sparked a larger discussion about
whether DefCore should explictly require running Linux as a compute
capability. The DefCore Committee has prepared a document[2] covering
the issue and possible actions, and is requesting review and comment
from the community, Board[3], and Technical Committee. The DefCore
committee would like to bring this topic for formal discussion to the
next TC meeting on December 8 to get input from TC on this issue.

Thanks,
Chris Hoge

[1] https://review.openstack.org/#/c/244782/
[2] 
https://docs.google.com/document/d/1Q_N93hJ-8WK4C3Ktcrex0mxv4VqoAjBzP9g6cDe0JoY/edit?usp=sharing
[3] https://wiki.openstack.org/wiki/Governance/Foundation/3Dec2015BoardMeeting


Re: [openstack-dev] [cinder][neutron][all] New third-party-ci testing requirements for OpenStack Compatible mark

2015-09-29 Thread Chris Hoge
On Sep 29, 2015, at 8:04 AM, Erlon Cruz  wrote:
> 
> Hi Cris,
> 
> There are some questions that came to my mind.
> 
> Cinder has near zero tolerance for backends that do not have a CI running. 
> So, can one assume that all drivers in Cinder will have the "OpenStack 
> Compatible" seal?

One of the reasons we started with Cinder was because they have
an existing program that is well maintained. Any driver passing
CI becomes eligible for the "OpenStack Compatible” mark. It’s not
automatic, and still needs a signed agreement with the Foundation.

> When you say that the driver have to 'pass' the integration tests, what tests 
> do you consider? All tests in tempest? All patches? Do you have any criteria 
> to determine if a backend is passing or not?

We’re letting the project drive what tests need to be passed. So,
taking a look at this dashboard[1] (it’s one of many that monitor
our test systems) the drivers are running the dsvm-tempest-full
tests. One of the things that the tests exercise, and we’re interested
in from the driver standpoint, are both the user-facing Cinder APIs
as well as the driver-facing APIs.

For Neutron, which we would like to help roll out in the coming year,
this would be a CI run that is defined by the Neutron development
team. We have no interest in dictating to the developers what should
be run. Instead, we want to adopt what the community considers
to be the best-practices and standards for drivers.

> About this "OpenStack Compatible" flag, how does it work? Will you hold a 
> list with the Compatible vendors? Is there anything a vendor needs to do in 
> order to use this?

“OpenStack Compatible” is one of the trademark programs that is
administered by the Foundation. A company that wants to apply the
OpenStack logo to their product needs to sign a licensing agreement,
which gives them the right to use the logo in their marketing materials.

We also create an entry in the OpenStack Marketplace for their
product, which has information about the company and the product, but
also information about tests that the product may have passed. The
best example I can give right now is with the “OpenStack Powered”
program, where we display which Defcore guideline a product has
successfully passed[2].

Chris

[1] http://ci-watch.tintri.com/project?project=cinder&time=24+hours
[2] For example: 
http://www.openstack.org/marketplace/public-clouds/unitedstack/uos-cloud

> Thanks,
> Erlon
> 
> On Mon, Sep 28, 2015 at 5:55 PM, Kyle Mestery wrote:
> The Neutron team also discussed this in Vancouver, you can see the etherpad 
> here [1]. We talked about the idea of creating a validation suite, and it 
> sounds like that's something we should again discuss in Tokyo for the Mitaka 
> cycle. I think a validation suite would be a great step forward for Neutron 
> third-party CI systems to use to validate they work with a release.
> 
> [1] https://etherpad.openstack.org/p/YVR-neutron-third-party-ci-liberty
> 
> On Sun, Sep 27, 2015 at 11:39 AM, Armando M. wrote:
> 
> 
> On 25 September 2015 at 15:40, Chris Hoge wrote:
> In November, the OpenStack Foundation will start requiring vendors requesting
> new "OpenStack Compatible" storage driver licenses to start passing the Cinder
> third-party integration tests.
> The new program was approved by the Board at
> the July meeting in Austin and follows the improvement of the testing 
> standards
> and technical requirements for the "OpenStack Powered" program. This is all
> part of the effort of the Foundation to use the OpenStack brand to guarantee a
> base-level of interoperability and consistency for OpenStack users and to
> protect the work of our community of developers by applying a trademark backed
> by their technical efforts.
> 
> The Cinder driver testing is the first step of a larger effort to apply
> community determined standards to the Foundation marketing programs. We're
> starting with Cinder because it has a successful testing program in place, and
> we have plans to extend the program to network drivers and OpenStack
> applications. We're going to require CI testing for new "OpenStack Compatible"
> storage licenses starting on November 1, and plan to roll out network and
> application testing in 2016.
> 
> One of our goals is to work with project leaders and developers to help us
> define and implement these test programs. The standards for third-party
> drivers and applications should be determined by the developers and users
> in our community, who are experts in how to maintain the quality of the
>

Re: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?

2015-09-29 Thread Chris Hoge

> On Sep 29, 2015, at 1:07 AM, Flavio Percoco  wrote:
> 
> On 28/09/15 16:29 -0400, Doug Hellmann wrote:
>> Excerpts from Mark Voelker's message of 2015-09-28 19:55:18 +:
>>> On Sep 28, 2015, at 9:03 AM, Doug Hellmann  wrote:
>>> >
>>> > Excerpts from John Garbutt's message of 2015-09-28 12:32:53 +0100:
>>> >> On 28 September 2015 at 12:10, Sean Dague  wrote:
>>> >>> On 09/27/2015 08:43 AM, Doug Hellmann wrote:
>>>  Excerpts from Mark Voelker's message of 2015-09-25 20:43:23 +:
>>> > On Sep 25, 2015, at 1:56 PM, Doug Hellmann  
>>> > wrote:
>>> >>
>>> >> Excerpts from Mark Voelker's message of 2015-09-25 17:42:24 +:
>>> >>> 
>>> >
>>> > Ah.  Thanks for bringing that up, because I think this may be an area 
>>> > where there’s some misconception about what DefCore is set up to do 
>>> > today.  In its present form, the Board of Directors has structured 
>>> > DefCore to look much more at trailing indicators of market acceptance 
>>> > rather than future technical direction.  More on that over here. [1]
>>> 
>>>  And yet future technical direction does factor in, and I'm trying
>>>  to add a new heuristic to that aspect of consideration of tests:
>>>  Do not add tests that use proxy APIs.
>>> 
>>>  If there is some compelling reason to add a capability for which
>>>  the only tests use a proxy, that's important feedback for the
>>>  contributor community and tells us we need to improve our test
>>>  coverage. If the reason to use the proxy is that no one is deploying
>>>  the proxied API publicly, that is also useful feedback, but I suspect
>>>  we will, in most cases (glance is the exception), say "Yeah, that's
>>>  not how we mean for you to run the services long-term, so don't
>>>  include that capability."
>>> >>>
>>> >>> I think we might also just realize that some of the tests are using the
>>> >>> proxy because... that's how they were originally written.
>>> >>
>>> >> From my memory, thats how we got here.
>>> >>
>>> >> The Nova tests needed to use an image API. (i.e. list images used to
>>> >> check the snapshot Nova, or similar)
>>> >>
>>> >> The Nova proxy was chosen over Glance v1 and Glance v2, mostly due to
>>> >> it being the only widely deployed option.
>>> >
>>> > Right, and I want to make sure it's clear that I am differentiating
>>> > between "these tests are bad" and "these tests are bad *for DefCore*".
>>> > We should definitely continue to test the proxy API, since it's a
>>> > feature we have and that our users rely on.
>>> >
>>> >>
>>> >>> And they could be rewritten to use native APIs.
>>> >>
>>> >> +1
>>> >> Once Glance v2 is available.
>>> >>
>>> >> Adding Glance v2 as advisory seems a good step to help drive more 
>>> >> adoption.
>>> >
>>> > I think we probably don't want to rewrite the existing tests, since
>>> > that effectively changes the contract out from under existing folks
>>> > complying with DefCore.  If we need new, parallel, tests that do
>>> > not use the proxy to make more suitable tests for DefCore to use,
>>> > we should create those.
>>> >
>>> >>
>>> >>> I do agree that "testing proxies" should not be part of Defcore, and I
>>> >>> like Doug's idea of making that a new heuristic in test selection.
>>> >>
>>> >> +1
>>> >> Thats a good thing to add.
>>> >> But I don't think we had another option in this case.
>>> >
>>> > We did have the option of leaving the feature out and highlighting the
>>> > discrepancy to the contributors so tests could be added. That
>>> > communication didn't really happen, as far as I can tell.
>>> >
>>>  Sorry, I wasn't clear. The Nova team would, I expect, view the use of
>>>  those APIs in DefCore as a reason to avoid deprecating them in the code
>>>  even if they wanted to consider them as legacy features that should be
>>>  removed. Maybe that's not true, and the Nova team would be happy to
>>>  deprecate the APIs, but I did think that part of the feedback cycle we
>>>  were establishing here was to have an indication from the outside of 
>>>  the
>>>  contributor base about what APIs are considered important enough to 
>>>  keep
>>>  alive for a long period of time.
>>> >>> I'd also agree with this. Defcore is a wider contract that we're trying
>>> >>> to get even more people to write to because that cross section should be
>>> >>> widely deployed. So deprecating something in Defcore is something I
>>> >>> think most teams, Nova included, would be very reluctant to do. It's
>>> >>> just asking for breaking your users.
>>> >>
>>> >> I can't see us removing the proxy APIs in Nova any time soon,
>>> >> regardless of DefCore, as it would break too many people.
>>> >>
>>> >> But personally, I like dropping them from Defcore, to signal that the
>>> >> best practice is to use the Glance v2 API directly, rather than the
>>> >> Nova proxy.
>>> >>
>>> >> Maybe they are just marked deprecated, but s

Re: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?

2015-09-25 Thread Chris Hoge

> On Sep 25, 2015, at 10:12 AM, Andrew Laski  wrote:
> 
> I understand that reasoning, but still am unsure on a few things.
> 
> The direction seems to be moving towards having a requirement that the same 
> functionality is offered in two places, Nova API and Glance V2 API. That 
> seems like it would fragment adoption rather than unify it.

My hope would be that proxies would be deprecated as new capabilities
moved in. Some of this will be driven by application developers too,
though. We’re looking at an interoperability standard, which has a
natural tension between backwards compatibility and new features.

> 
> Also after digging in on image-create I feel that there may be a mixup.  The 
> image-create in Glance and image-create in Nova are two different things. In 
> Glance you create an image and send the disk image data in the request, in 
> Nova an image-create takes a snapshot of the instance provided in the 
> request.  But it seems like DefCore is treating them as equivalent unless I'm 
> misunderstanding.
> 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][neutron][all] New third-party-ci testing requirements for OpenStack Compatible mark

2015-09-25 Thread Chris Hoge
In November, the OpenStack Foundation will start requiring vendors requesting
new "OpenStack Compatible" storage driver licenses to start passing the Cinder
third-party integration tests. The new program was approved by the Board at
the July meeting in Austin and follows the improvement of the testing standards
and technical requirements for the "OpenStack Powered" program. This is all
part of the effort of the Foundation to use the OpenStack brand to guarantee a
base-level of interoperability and consistency for OpenStack users and to
protect the work of our community of developers by applying a trademark backed
by their technical efforts.

The Cinder driver testing is the first step of a larger effort to apply
community determined standards to the Foundation marketing programs. We're
starting with Cinder because it has a successful testing program in place, and
we have plans to extend the program to network drivers and OpenStack
applications. We're going to require CI testing for new "OpenStack Compatible"
storage licenses starting on November 1, and plan to roll out network and
application testing in 2016.

One of our goals is to work with project leaders and developers to help us
define and implement these test programs. The standards for third-party
drivers and applications should be determined by the developers and users
in our community, who are experts in how to maintain the quality of the
ecosystem.

We welcome any feedback on this program, and are also happy to answer any
questions you might have.

Thanks!

Chris Hoge
Interop Engineer
OpenStack Foundation
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?

2015-09-25 Thread Chris Hoge

> On Sep 25, 2015, at 6:59 AM, Doug Hellmann  wrote:
> 
> Excerpts from Mark Voelker's message of 2015-09-25 01:20:04 +:
>>> 
>>> On Sep 24, 2015, at 5:55 PM, Sabari Murugesan  wrote:
>>> 
>>> Hi Melanie
>>> 
>>> In general, images created by glance v1 API should be accessible using v2 
>>> and
>>> vice-versa. We fixed some bugs [1] [2] [3] where metadata associated with 
>>> an image was
>>> causing incompatibility. These fixes were back-ported to stable/kilo.
>>> 
>>> Thanks
>>> Sabari
>>> 
>>> [1] - https://bugs.launchpad.net/glance/+bug/1447215
>>> [2] - https://bugs.launchpad.net/bugs/1419823 
>>> [3] - https://bugs.launchpad.net/python-glanceclient/+bug/1447193 
>>> 
>>> 
>>> On Thu, Sep 24, 2015 at 2:17 PM, melanie witt  wrote:
>>> Hi All,
>>> 
>>> I have been looking and haven't yet located documentation about how to 
>>> upgrade from glance v1 to glance v2.
>>> 
>>> From what I understand, images and snapshots created with v1 can't be 
>>> listed/accessed through the v2 api. Are there instructions about how to 
>>> migrate images and snapshots from v1 to v2? Are there other 
>>> incompatibilities between v1 and v2?
>>> 
>>> I'm asking because I have read that glance v1 isn't defcore compliant and 
>>> so we need all projects to move to v2, but the incompatibility from v1 to 
>>> v2 is preventing that in nova. Is there anything else preventing v2 
>>> adoption? Could we move to glance v2 if there's a migration path from v1 to 
>>> v2 that operators can run through before upgrading to a version that uses 
>>> v2 as the default?
>> 
>> Just to clarify the DefCore situation a bit here: 
>> 
>> The DefCore Committee is considering adding some Glance v2
> capabilities [1] as “advisory” (e.g. not required now but might be
> in the future unless folks provide feedback as to why it shouldn’t
> be) in its next Guideline, which is due to go to the Board of Directors
> in January and will cover Juno, Kilo, and Liberty [2]. The Nova image
> API’s are already required [3][4].  As discussion began about which
> Glance capabilities to include and whether or not to keep the Nova
> image API’s as required, it was pointed out that the many ways images
> can currently be created in OpenStack is problematic from an
> interoperability point of view in that some clouds use one and some use
> others.  To be included in a DefCore Guideline, capabilities are scored
> against twelve Criteria [5], and need to achieve a certain total to be
> included.  Having a bunch of different ways to deal with images
> actually hurts the chances of any one of them meeting the bar because
> it makes it less likely that they’ll achieve several criteria.  For
> example:
>> 
>> One of the criteria is “widely deployed” [6].  In the case of images, the 
>> Nova image-create API and Glance v2 are both pretty widely deployed [7]; 
>> Glance v1 isn’t, and at least one uses none of those but instead uses the 
>> import task API.
>> 
>> Another criteria is “atomic” [8] which basically means the capability is 
>> unique and can’t be built out of other required capabilities.  Since the 
>> Nova image-create API is already required and effectively does the same 
>> thing as glance v1 and v2’s image create API’s, the latter lose points.
> 
> This seems backwards. The Nova API doesn't "do the same thing" as
> the Glance API, it is a *proxy* for the Glance API. We should not
> be requiring proxy APIs for interop. DefCore should only be using
> tests that talk directly to the service that owns the feature being
> tested.

I agree in general, at the time the standard was approved the
only api we had available to us (because only nova code was
being considered for inclusion) was the proxy.

We’re looking at v2 as the required api going forward, but
as has been mentioned before, the nova proxy requires that
v1 be present as a non-public api. Not the best situation in
the world, and I’m personally looking forward to Glance,
Cinder, and Neutron becoming explicitly required APIs in
DefCore.


> Doug
> 
>> 
>> Another criteria is “future direction” [9].  Glance v1 gets no points here 
>> since v2 is the current API, has been for a while, and there’s even been 
>> some work on v3 already.
>> 
>> There are also criteria for  “used by clients” [11].  Unfortunately both 
>> Glance v1 and v2 fall down pretty hard here as it turns out that of all the 
>> client libraries users reported in the last user survey, it appears the only 
>> one other than the OpenStack clients supports Glance v2 and one supports 
>> Glance v1 while the rest all rely on the Nova API's.  Even within OpenStack 
>> we don’t necessarily have good adoption since Nova still uses the v1 API to 
>> talk to Glance and OpenStackClient didn’t support image creation with v2 
>> until this week’s 1.7.0 release. [13]
>> 
>> So, it’s a bit problematic that v1 is still being used even within the 
>> project (though it did get slightly better this week).  It’s highly unlikely 
>> at this point t

[openstack-dev] [defcore] Scoring for DefCore 2016.01 Guideline

2015-09-16 Thread Chris Hoge
The DefCore Committee is working on scoring capabilities for the upcoming
2016.01 Guideline, a solid draft of which will be available at the Mitaka
summit for community review and will go to the Board of Directors for
approval in January [1]. The current 2015.07 Guideline [2] covers Nova,
Swift, and Keystone. Scoring for the upcoming Guideline may include new
capabilities for Neutron, Glance, Cinder, and Heat, as well as
updated capabilities for Keystone and Nova.

As part of our process, we want to encourage the development and user
communities to give feedback on the proposed capability categorizations
and how they are currently graded against the Defcore criteria [3].

Capabilities we're currently considering for possible inclusion are:
Neutron:  https://review.openstack.org/#/c/210080/
Glance:   https://review.openstack.org/#/c/213353/
Cinder:   https://review.openstack.org/#/c/221631/
Heat: https://review.openstack.org/#/c/216983/
Keystone: https://review.openstack.org/#/c/213330/
Nova: https://review.openstack.org/#/c/223915/

We would especially like to thank the PTLs and technical community members
who helped draft the proposed capabilities lists and provided
feedback--your input has been very helpful.

[1] 
http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/2015A.rst#n13
[2] http://git.openstack.org/cgit/openstack/defcore/tree/2015.07.json
[3] 
http://git.openstack.org/cgit/openstack/defcore/tree/doc/source/process/CoreCriteria.rst


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest] UUID Tagging Requirement and "Big Bang" Patch

2015-02-27 Thread Chris Hoge
This work has landed. New tests will now be gated on the existence of an
idempotent_id.
If you have open submissions to Tempest there’s a good possibility you’ll have 
to rebase.

-Chris

> On Feb 26, 2015, at 2:30 PM, Chris Hoge  
> wrote:
> 
> Update on this:
> 
> The tools for checking for and adding UUIDs have been completed and reviewed.
> 
> https://review.openstack.org/#/c/157273
> 
> A new patch has been sent up that adds UUIDs to all tests
> 
> https://review.openstack.org/#/c/159633
> 
> Note that after discussion with the openstack-qa team the decorator has
> changed to be of the form
> 
> @test.idempotent_id('12345678-1234-5678-1234-123456789abc')
> 
> Once the second patch lands you will most certainly need to rebase your work
> and include ids in all new tests. When refactoring tests, please preserve the
> id value so that various projects (Defcore, Refstack, Rally) can track the 
> actual
> location of tests for capability testing.
> 
> Thanks,
> Chris Hoge
> Interop Engineer
> OpenStack Foundation
> 
>> On Feb 22, 2015, at 11:47 PM, Chris Hoge wrote:
>> 
>> Once the gate settles down this week I’ll be sending up a major 
>> “big bang” patch to Tempest that will tag all of the tests with unique
>> identifiers, implementing this spec: 
>> 
>> https://github.com/openstack/qa-specs/blob/master/specs/meta-data-and-uuid-for-tests.rst
>> 
>> The work in progress is here, and includes a change to the gate that
>> every test developer should be aware of.
>> 
>> https://review.openstack.org/#/c/157273/
>> 
>> All tests will now require a UUID metadata identifier, generated from the
>> uuid.uuid4 function. The form of the identifier is a decorator like:
>> 
>> @test.meta(uuid='12345678-1234-5678-1234-567812345678')
>> 
>> To aid in hacking rules, the @test.meta decorator must be directly before the
>> function definition and after the @test.services decorator, which itself
>> must appear after all other decorators.
>> 
>> The gate will now require that every test have a uuid that is indeed
>> unique.
>> 
>> This work is meant to give a stable point of reference to tests that will
>> persist through test refactoring and moving.
>> 
>> Thanks,
>> Chris Hoge
>> Interop Engineer
>> OpenStack Foundation
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest] UUID Tagging Requirement and "Big Bang" Patch

2015-02-26 Thread Chris Hoge
Update on this:

The tools for checking for and adding UUIDs have been completed and reviewed.

https://review.openstack.org/#/c/157273

A new patch has been sent up that adds UUIDs to all tests

https://review.openstack.org/#/c/159633

Note that after discussion with the openstack-qa team the decorator has
changed to be of the form

@test.idempotent_id('12345678-1234-5678-1234-123456789abc')

Once the second patch lands you will most certainly need to rebase your work
and include ids in all new tests. When refactoring tests, please preserve the
id value so that various projects (Defcore, Refstack, Rally) can track the 
actual
location of tests for capability testing.
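For readers unfamiliar with how a decorator like this works, here is a
minimal sketch of the idea: validate the id and attach it to the test
function so tools can map ids to test locations. This is a simplified
illustration, not Tempest's actual implementation.

```python
import uuid

def idempotent_id(test_id):
    """Simplified stand-in for Tempest's @test.idempotent_id decorator:
    check that test_id is a well-formed UUID string, then record it as
    an attribute on the decorated test function."""
    uuid.UUID(test_id)  # raises ValueError for a malformed id

    def decorator(func):
        func.idempotent_id = test_id
        return func
    return decorator

@idempotent_id('12345678-1234-5678-1234-123456789abc')
def test_list_servers():
    pass

print(test_list_servers.idempotent_id)
# prints 12345678-1234-5678-1234-123456789abc
```

Because the id lives on the function rather than in its name or module
path, refactoring or moving the test leaves the identifier intact.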

Thanks,
Chris Hoge
Interop Engineer
OpenStack Foundation

> On Feb 22, 2015, at 11:47 PM, Chris Hoge  
> wrote:
> 
> Once the gate settles down this week I’ll be sending up a major 
> “big bang” patch to Tempest that will tag all of the tests with unique
> identifiers, implementing this spec: 
> 
> https://github.com/openstack/qa-specs/blob/master/specs/meta-data-and-uuid-for-tests.rst
> 
> The work in progress is here, and includes a change to the gate that
> every test developer should be aware of.
> 
> https://review.openstack.org/#/c/157273/
> 
> All tests will now require a UUID metadata identifier, generated from the
> uuid.uuid4 function. The form of the identifier is a decorator like:
> 
> @test.meta(uuid='12345678-1234-5678-1234-567812345678')
> 
> To aid in hacking rules, the @test.meta decorator must be directly before the
> function definition and after the @test.services decorator, which itself
> must appear after all other decorators.
> 
> The gate will now require that every test have a uuid that is indeed
> unique.
> 
> This work is meant to give a stable point of reference to tests that will
> persist through test refactoring and moving.
> 
> Thanks,
> Chris Hoge
> Interop Engineer
> OpenStack Foundation

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tempest] UUID Tagging Requirement and "Big Bang" Patch

2015-02-22 Thread Chris Hoge
Once the gate settles down this week I’ll be sending up a major 
“big bang” patch to Tempest that will tag all of the tests with unique
identifiers, implementing this spec: 

https://github.com/openstack/qa-specs/blob/master/specs/meta-data-and-uuid-for-tests.rst

The work in progress is here, and includes a change to the gate that
every test developer should be aware of.

https://review.openstack.org/#/c/157273/

All tests will now require a UUID metadata identifier, generated from the
uuid.uuid4 function. The form of the identifier is a decorator like:

@test.meta(uuid='12345678-1234-5678-1234-567812345678')

To aid in hacking rules, the @test.meta decorator must be directly before the
function definition and after the @test.services decorator, which itself
must appear after all other decorators.

The gate will now require that every test have a uuid that is indeed
unique.
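The two requirements above, that ids come from uuid.uuid4 and that no id
appears twice across the suite, can be sketched as follows. This is an
illustrative check, not the actual gate script.

```python
import uuid

# A fresh id for a new test is generated once, then hard-coded into the
# decorator; it is never regenerated at run time.
new_id = str(uuid.uuid4())

def find_duplicates(test_ids):
    """Gate-style check: every id must parse as a UUID, and no id may
    appear more than once across the collected test ids."""
    seen, dupes = set(), []
    for tid in test_ids:
        uuid.UUID(tid)  # raises ValueError if malformed
        if tid in seen:
            dupes.append(tid)
        seen.add(tid)
    return dupes

ids = [new_id, '12345678-1234-5678-1234-567812345678']
print(find_duplicates(ids))  # [] -- all unique
```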

This work is meant to give a stable point of reference to tests that will
persist through test refactoring and moving.

Thanks,
Chris Hoge
Interop Engineer
OpenStack Foundation
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [defcore] Proposal for new openstack/defcore repository

2015-02-12 Thread Chris Hoge
In the most recent Defcore committee meeting we discussed the need for a 
repository to host artifacts related to the Defcore process[1]. These artifacts 
will include documentation of the Defcore process, lists of advisory and 
required capabilities for releases, and useful tools and instructions for 
configuring and testing clouds against the capability lists.

One of our goals is increased community awareness and participation around the 
process, and we feel that a Gerrit backed repository helps to achieve this goal 
by providing a well understood mechanism for community members to comment on 
policies and capabilities before they are merged. For members of the community 
who aren’t familiar with the Gerrit workflow, I would be more than happy to 
help them out with understanding the process or acting as a proxy for their 
comments and concerns.

We're proposing to host the repository at openstack/defcore, as this is work 
being done by a board-backed committee with cross cutting concerns for all 
OpenStack projects. All projects are owned by some parent organization within 
the OpenStack community. One possibility for ownership that we considered was 
the Technical Committee, with precedent set by the ownership of the API Working 
Group repository[2]. However, we felt that there is a need to allow for 
projects that are owned by the Board, and are also proposing a new Board 
ownership group.

The core reviewers of the repository will be the Defcore Committee co-chairs, 
with additional core members added at the discretion of board members leading 
the committee.

In the coming weeks we're going to be working hard on defining new capabilities 
for the Icehouse, Juno, and future releases. We're looking forward to meeting 
with the developer and operator community to help define the new capabilities 
with an eye towards interoperability across the entire OpenStack ecosystem. The 
creation of this repository is an important step in that direction.

Thanks,
Chris Hoge
Interop Engineer
OpenStack Foundation

[1] https://etherpad.openstack.org/p/DefCoreScale.4
[2] 
http://git.openstack.org/cgit/openstack/governance/tree/reference/technical-committee-repos.yaml
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev