Re: [openstack-dev] [kubernetes-python-client] Failed to get service list

2018-01-10 Thread Lingxian Kong
Thanks for the reminder, Michal.

I sent the email here because the client library is a dependency of several
OpenStack projects, and the issue I found may cause problems for them. I also
wanted to ask for hints in case they have already solved it.


Cheers,
Lingxian Kong (Larry)

On Thu, Jan 11, 2018 at 3:58 AM, Michal Rostecki  wrote:

> On 01/10/2018 07:40 AM, Lingxian Kong wrote:
> > I submitted an issue on GitHub[1] the other day but didn't get any
> > response, so I'm trying my luck here in case someone else has the same
> > problem, someone already has a solution I didn't know about, or,
> > hopefully, I simply missed something.
> >
>
> This is not the correct mailing list for discussing that project.
> Kubernetes-incubator is part of the Kubernetes community, not OpenStack.
> If you have trouble reaching the python-client developers on GitHub,
> I recommend using the Kubernetes Slack.
>
> Cheers,
> Michal
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kubernetes-python-client] Failed to get service list

2018-01-10 Thread Michal Rostecki
On 01/10/2018 07:40 AM, Lingxian Kong wrote:
> I submitted an issue on GitHub[1] the other day but didn't get any
> response, so I'm trying my luck here in case someone else has the same
> problem, someone already has a solution I didn't know about, or,
> hopefully, I simply missed something.
> 

This is not the correct mailing list for discussing that project.
Kubernetes-incubator is part of the Kubernetes community, not OpenStack.
If you have trouble reaching the python-client developers on GitHub,
I recommend using the Kubernetes Slack.

Cheers,
Michal



[openstack-dev] [kubernetes-python-client] Failed to get service list

2018-01-09 Thread Lingxian Kong
I submitted an issue on GitHub[1] the other day but didn't get any
response, so I'm trying my luck here in case someone else has the same
problem, someone already has a solution I didn't know about, or,
hopefully, I simply missed something.

The problem occurs when I try to get the service list (the result should be
an empty list); instead, I get the following exception:

2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/apis/core_v1_api.py", line 12951, in list_namespaced_service
2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server     (data) = self.list_namespaced_service_with_http_info(namespace, **kwargs)
2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/apis/core_v1_api.py", line 13054, in list_namespaced_service_with_http_info
2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server     collection_formats=collection_formats)
2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/api_client.py", line 321, in call_api
2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server     _return_http_data_only, collection_formats, _preload_content, _request_timeout)
2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/api_client.py", line 163, in __call_api
2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server     return_data = self.deserialize(response_data, response_type)
2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/api_client.py", line 236, in deserialize
2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server     return self.__deserialize(data, response_type)
2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/api_client.py", line 276, in __deserialize
2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server     return self.__deserialize_model(data, klass)
2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/api_client.py", line 622, in __deserialize_model
2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server     instance = klass(**kwargs)
2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/models/v1_service_list.py", line 60, in __init__
2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server     self.items = items
2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/kubernetes/client/models/v1_service_list.py", line 110, in items
2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server     raise ValueError("Invalid value for `items`, must not be `None`")
2018-01-10 06:31:58.930 6417 ERROR oslo_messaging.rpc.server ValueError: Invalid value for `items`, must not be `None`
>
[1]: https://github.com/kubernetes-incubator/client-python/issues/424
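For anyone hitting the same trace: the failure mode is that the generated
model class rejects a null `items` field, even though the API server can
legitimately return one for an empty list. The sketch below is not the real
kubernetes client code; it is a minimal, hypothetical reproduction of the
validating setter, plus a defensive wrapper that normalizes null to an empty
list before the model sees it.

```python
class V1ServiceListSketch:
    """Hypothetical stand-in for kubernetes.client.models.V1ServiceList."""

    def __init__(self, items=None):
        self._items = None
        self.items = items  # triggers the validating setter below

    @property
    def items(self):
        return self._items

    @items.setter
    def items(self, items):
        # Mirrors the check raising in the traceback above.
        if items is None:
            raise ValueError("Invalid value for `items`, must not be `None`")
        self._items = items


def service_list_or_empty(payload):
    """Defensive wrapper: treat a null `items` field as an empty list."""
    items = payload.get("items")
    return V1ServiceListSketch(items=[] if items is None else items)


# {"items": null} is the shape the server can return for an empty list.
result = service_list_or_empty({"items": None})
print(len(result.items))  # 0
```

The same normalization could be applied by callers before handing raw
response data to the generated models.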

Cheers,
Lingxian Kong (Larry)


Re: [openstack-dev] [kubernetes][kolla][openstack-helm][magnum] Kubernetes Day at OpenStack Summit NA 2017 announced!

2017-04-13 Thread Ihor Dvoretskyi
Hello everyone,

I'm pleased to announce that the event schedule has been published - you may
find it on the event page [1].

Please join us on May 9 at the OpenStack Summit venue in Boston!

Ihor

1. https://www.cncf.io/event/openstack-north-america-2017

On Tue, Feb 28, 2017 at 12:44 AM, Ihor Dvoretskyi 
wrote:

> Hello everyone,
>
> On behalf of the Kubernetes Community and the OpenStack Special Interest Group
> [0], I'm happy to announce Kubernetes Day at OpenStack Summit NA 2017. The
> event will be hosted by CNCF as a part of OpenStack’s Open Source Days in
> Boston [1].
>
> The CFP process is already open - feel free to submit your talk. You may find
> more detailed information about the event on the CNCF's event page [2].
>
> Special thanks to CNCF, OpenStack Foundation, and individuals who made
> this happen.
>
> 0. https://github.com/kubernetes/community/blob/master/sig-openstack/README.md
> 1. https://www.openstack.org/summit/boston-2017/open-source-days/
> 2. https://www.cncf.io/event/openstack-north-america-2017
>


Re: [openstack-dev] [kubernetes][go] External OpenStack Cloud Provider for Kubernetes

2017-04-06 Thread Steve Gordon
- Original Message -
> From: "Monty Taylor" <mord...@inaugust.com>
> To: openstack-dev@lists.openstack.org
> Sent: Sunday, April 2, 2017 4:16:44 PM
> Subject: Re: [openstack-dev] [kubernetes][go] External OpenStack Cloud 
> Provider for Kubernetes
> 
> On 04/02/2017 02:53 PM, Chris Hoge wrote:
> > Now that the provider has a repository in the OpenStack project
> > namespace, we need to move over the existing set of issues and pull
> > requests and create an initial work list for migrating patches and
> > fixing existing issues.
> > 
> > I've started up an etherpad where we can track that work[1]. In the longer
> > run we should migrate over to Launchpad or Storyboard. One question,
> > to help preserve continuity with the K8S community workflow: do we want
> > to investigate ways to allow for issue creation in the OpenStack
> > namespace on GitHub?
> 
> I do not think this is a thing we want to do. While I understand the
> urge, a project needs to live somewhere (in this case we've chosen
> OpenStack) and should behave as projects do in that location. When I
> work on Ansible, I do issues on github. When I deal with tox, I file
> issues on bitbucket. Back when I dealt with Jenkins I filed issues in
> their Jira. I do not think that filing an issue in the issue tracker for
> a project is too onerous of a request to make of someone.
> 
> We have issues turned off in all of our github mirrors, so it's highly
> unlikely someone will accidentally attempt to file an issue like that.
> (it's too bad we can't similarly turn off pull requests, but oh well)

I agree with the above comments w.r.t. tooling, but I think we still need to 
address what I believe is at the core of Chris's concern: in a world where we 
have extracted the cloud provider implementation from Kube (and externalizing 
these from Kube has indeed been on the table for some time, so thanks Dims for 
taking the initiative), how do we continue to work on it in the OpenStack 
community while still maintaining - if not extending - our level of interop 
and visibility with the Kubernetes community? I think the focus of concern 
here should be less on the tools - as you note, each community has its own 
tools and that is unlikely to change - and more on communication, but it can 
be difficult to decouple the two (IRC versus Slack, Zoom, etc.).

Thus far, discussion of open PRs/issues and ongoing work w.r.t. the provider 
implementation has primarily taken place in the Kubernetes OpenStack SIG (the 
scope of which was recently extended to make space for discussion and 
collaboration between the various OpenStack deployment projects and folks 
anchored on the Kubernetes side of things, specifically w.r.t. Helm). It's not 
immediately clear to me how we would prefer to maintain visibility on the 
Kubernetes side of the fence going forward, because a natural progression of 
"this is developed, tested, and served up on OpenStack infra" would of course 
also be to move most of these discussions to IRC.

Thanks,

Steve



Re: [openstack-dev] [kubernetes][go] External OpenStack Cloud Provider for Kubernetes

2017-04-04 Thread Clint Byrum
Excerpts from Chris Hoge's message of 2017-04-04 17:09:11 -0400:
> 
> > On Apr 2, 2017, at 4:29 PM, Monty Taylor <mord...@inaugust.com> wrote:
> > 
> > On 03/29/2017 03:39 PM, Steve Gordon wrote:
> >> - Original Message -
> >>> From: "Davanum Srinivas" <dava...@gmail.com>
> >>> To: "Chris Hoge" <ch...@openstack.org>
> >>> Cc: "OpenStack Development Mailing List (not for usage questions)" 
> >>> <openstack-dev@lists.openstack.org>,
> >>> "kubernetes-sig-openstack" <kubernetes-sig-openst...@googlegroups.com>
> >>> Sent: Wednesday, March 29, 2017 2:28:29 PM
> >>> Subject: Re: [openstack-dev] [kubernetes][go] External OpenStack Cloud 
> >>> Provider for Kubernetes
> >>> 
> >>> Team,
> >>> 
> >>> Repo is ready:
> >>> http://git.openstack.org/cgit/openstack/k8s-cloud-provider
> >>> 
> >>> I've taken the liberty of updating it with the latest changes in the
> >>> kubernetes/kubernetes repo:
> >>> https://review.openstack.org/#/q/project:openstack/k8s-cloud-provider is
> >>> ready
> >>> 
> >>> So logical next step would be to add CI jobs to test in OpenStack
> >>> Infra. Anyone interested?
> >> 
> >> One question I have around this - do we have a shared view of what the 
> >> ideal matrix of tested combinations would look like? E.g. kubernetes master on
> >> openstack project's master, kubernetes master on openstack project's 
> >> stable branches (where available), do we also need/want to test kubernetes 
> >> stable milestones, etc.
> >> 
> >> At a high level my goal would be the same as Chris's "k8s gating on 
> >> OpenStack in the same ways that it does on AWS and GCE." which would imply 
> >> reporting results on PRs proposed to K8S master *before* they merge but 
> >> not sure we all agree on what that actually means testing against in 
> >> practice on the OpenStack side of the equation?
> > 
> > I think we want to have jobs that have the ability to test:
> > 
> > 1) A proposed change to k8s-openstack-provider against current master of
> > OpenStack
> > 2) A proposed change to k8s-openstack-provider against a stable release
> > of OpenStack
> > 3) A proposed change to OpenStack against current master of
> > k8s-openstack-provider
> > 4) A proposed change to OpenStack against stable release of
> > k8s-openstack-provider
> > 
> > Those are all easy now that the code is in gerrit, and it's well defined
> > what triggers and where it reports.
> > 
> > Additionally, we need to test the surface area between
> > k8s-openstack-provider and k8s itself. (if we wind up needing to test
> > k8s against proposed changes to OpenStack then we've likely done
> > something wrong in life)
> > 
> > 5) A proposed change to k8s-openstack-provider against current master of k8s
> > 6) A proposed change to k8s-openstack-provider against a stable release
> > of k8s
> > 7) A proposed change to k8s against current master of k8s-openstack-provider
> > 8) A proposed change to k8s against stable release of k8s-openstack-provider
> > 
> > 5 and 6 are things we can do right now. 7 and 8 will have to wait for GH
> > support to land in zuul (without which we can neither trigger test jobs
> > on proposed changes to k8s nor can we report the results back to anyone)
> 
> 7 and 8 are going to be pretty important for integrating into the K8S
> release process. At the risk of having a work item thrown at me,
> is there a target for when that feature will land?
> 

Hi! Github support is happening basically as "zuulv3+1". We're working
on it in parallel with the v3 effort, so it should be a relatively quick
+1, but I'd expect infra will need a couple months of shaking out v3
bugs and getting everything ported before we can start talking about
hooking infra's zuul up to Github.



Re: [openstack-dev] [kubernetes][go] External OpenStack Cloud Provider for Kubernetes

2017-04-04 Thread Chris Hoge

> On Apr 2, 2017, at 4:29 PM, Monty Taylor <mord...@inaugust.com> wrote:
> 
> On 03/29/2017 03:39 PM, Steve Gordon wrote:
>> - Original Message -
>>> From: "Davanum Srinivas" <dava...@gmail.com>
>>> To: "Chris Hoge" <ch...@openstack.org>
>>> Cc: "OpenStack Development Mailing List (not for usage questions)" 
>>> <openstack-dev@lists.openstack.org>,
>>> "kubernetes-sig-openstack" <kubernetes-sig-openst...@googlegroups.com>
>>> Sent: Wednesday, March 29, 2017 2:28:29 PM
>>> Subject: Re: [openstack-dev] [kubernetes][go] External OpenStack Cloud 
>>> Provider for Kubernetes
>>> 
>>> Team,
>>> 
>>> Repo is ready:
>>> http://git.openstack.org/cgit/openstack/k8s-cloud-provider
>>> 
>>> I've taken the liberty of updating it with the latest changes in the
>>> kubernetes/kubernetes repo:
>>> https://review.openstack.org/#/q/project:openstack/k8s-cloud-provider is
>>> ready
>>> 
>>> So logical next step would be to add CI jobs to test in OpenStack
>>> Infra. Anyone interested?
>> 
>> One question I have around this - do we have a shared view of what the ideal 
>> matrix of tested combinations would look like? E.g. kubernetes master on 
>> openstack project's master, kubernetes master on openstack project's stable 
>> branches (where available), do we also need/want to test kubernetes stable 
>> milestones, etc.
>> 
>> At a high level my goal would be the same as Chris's "k8s gating on 
>> OpenStack in the same ways that it does on AWS and GCE." which would imply 
>> reporting results on PRs proposed to K8S master *before* they merge but not 
>> sure we all agree on what that actually means testing against in practice on 
>> the OpenStack side of the equation?
> 
> I think we want to have jobs that have the ability to test:
> 
> 1) A proposed change to k8s-openstack-provider against current master of
> OpenStack
> 2) A proposed change to k8s-openstack-provider against a stable release
> of OpenStack
> 3) A proposed change to OpenStack against current master of
> k8s-openstack-provider
> 4) A proposed change to OpenStack against stable release of
> k8s-openstack-provider
> 
> Those are all easy now that the code is in gerrit, and it's well defined
> what triggers and where it reports.
> 
> Additionally, we need to test the surface area between
> k8s-openstack-provider and k8s itself. (if we wind up needing to test
> k8s against proposed changes to OpenStack then we've likely done
> something wrong in life)
> 
> 5) A proposed change to k8s-openstack-provider against current master of k8s
> 6) A proposed change to k8s-openstack-provider against a stable release
> of k8s
> 7) A proposed change to k8s against current master of k8s-openstack-provider
> 8) A proposed change to k8s against stable release of k8s-openstack-provider
> 
> 5 and 6 are things we can do right now. 7 and 8 will have to wait for GH
> support to land in zuul (without which we can neither trigger test jobs
> on proposed changes to k8s nor can we report the results back to anyone)

7 and 8 are going to be pretty important for integrating into the K8S
release process. At the risk of having a work item thrown at me,
is there a target for when that feature will land?

It's not critical, though; sorting out every other item gives a pretty
good set of initial tests.

Of note, e2e tests have some unreliability because of things like
hard sleeps[1]. It sounds like the K8S community is trying to address
these issues, but initially we should be expecting quite a few false
negatives (where negative means test failure).

[1] https://groups.google.com/forum/#!topic/kubernetes-sig-testing/a3XUvUVmxWU
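For anyone chasing that kind of flakiness: the usual fix for a hard sleep is a
bounded poll that exits as soon as the condition holds. A generic sketch (the
helper and names below are mine, not from the k8s e2e suite):

```python
import time


def wait_for(condition, timeout=60.0, interval=1.0):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    Unlike a hard `time.sleep(60)`, this returns as soon as the condition is
    met, and fails loudly (instead of silently racing) when it never is.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("condition not met within %.1fs" % timeout)


# Example: wait for a (simulated) service list to become non-empty.
services = []

def service_created():
    services.append("my-service")  # stand-in for polling a real API
    return list(services)

print(wait_for(service_created, timeout=5.0, interval=0.1))  # ['my-service']
```

The same pattern works for any eventually-consistent resource check; the
interval and timeout just need tuning to the API being polled.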

> 
> I would recommend that we make 5 and 6 non-voting until such a time as
> we are reporting on 7 and 8 back to k8s and have a reasonable
> expectation someone will pay attention to failures - otherwise k8s will
> be able to wedge the k8s-openstack-provider gate.
> 
>>> On Sat, Mar 25, 2017 at 12:10 PM, Chris Hoge <ch...@openstack.org> wrote:
>>>> 
>>>> 
>>>> On Friday, March 24, 2017 at 8:46:42 AM UTC-7, Antoni Segura Puimedon
>>>> wrote:
>>>>> 
>>>>> 
>>>>> 
>>>>> On Friday, March 24, 2017 at 3:59:18 PM UTC+1, Graham Hayes wrote:
>>>>>> 
>>>>>> On 24/03/17 10:27 -0400, Davanum Srinivas wrote:
>>>>>>> Folks,
>>>>>>> 
>>>>>>> As discussed in the etherpad:
>>>>>>> http

Re: [openstack-dev] [kubernetes][go] External OpenStack Cloud Provider for Kubernetes

2017-04-04 Thread Chris Hoge

> On Apr 2, 2017, at 4:16 PM, Monty Taylor  wrote:
> 
> On 04/02/2017 02:53 PM, Chris Hoge wrote:
>> Now that the provider has a repository in the OpenStack project
>> namespace, we need to move over the existing set of issues and pull
>> requests and create an initial work list for migrating patches and
>> fixing existing issues.
>> 
>> I've started up an etherpad where we can track that work[1]. In the longer
>> run we should migrate over to Launchpad or Storyboard. One question,
>> to help preserve continuity with the K8S community workflow: do we want
>> to investigate ways to allow for issue creation in the OpenStack
>> namespace on GitHub?
> 
> I do not think this is a thing we want to do. While I understand the
> urge, a project needs to live somewhere (in this case we've chosen
> OpenStack) and should behave as projects do in that location. When I
> work on Ansible, I do issues on github. When I deal with tox, I file
> issues on bitbucket. Back when I dealt with Jenkins I filed issues in
> their Jira. I do not think that filing an issue in the issue tracker for
> a project is too onerous of a request to make of someone.

Sounds reasonable.

I still want to think about how to communicate efficiently across
projects. This thread, for example, was cross posted across communities,
and has now forked as a result.

I’m personally not thrilled with cross-posting. My proposal would be to
treat the openstack-dev mailing list as the source for development-related
discussions; I can feed highlights of those discussions to
sig-k8s-openstack, and relay any relevant discussions from there back
to this list.

> We have issues turned off in all of our github mirrors, so it's highly
> unlikely someone will accidentally attempt to file an issue like that.
> (it's too bad we can't similarly turn off pull requests, but oh well)
> 
> 
>> [1] https://etherpad.openstack.org/p/k8s-provider-issue-migration
>> 
>> On Friday, March 24, 2017 at 7:27:09 AM UTC-7, Davanum Srinivas wrote:
>> 
>>Folks,
>> 
>>As discussed in the etherpad:
>>https://etherpad.openstack.org/p/go-and-containers
>>
>> 
>>Here's a request for a repo in OpenStack:
>>https://review.openstack.org/#/c/449641/
>>
>> 
>>This request pulls in the existing code from kubernetes/kubernetes
>>repo and preserves the git history too
>>https://github.com/dims/k8s-cloud-provider
>>
>> 
>>Anyone interested? please ping me on Slack or IRC and we can
>>continue this work.
>> 
>>Thanks,
>>Dims
>> 
>>-- 
>>Davanum Srinivas :: https://twitter.com/dims
>> 
>> 
>> 
> 
> 




Re: [openstack-dev] [kubernetes][go] External OpenStack Cloud Provider for Kubernetes

2017-04-02 Thread Monty Taylor
On 03/29/2017 03:39 PM, Steve Gordon wrote:
> - Original Message -
>> From: "Davanum Srinivas" <dava...@gmail.com>
>> To: "Chris Hoge" <ch...@openstack.org>
>> Cc: "OpenStack Development Mailing List (not for usage questions)" 
>> <openstack-dev@lists.openstack.org>,
>> "kubernetes-sig-openstack" <kubernetes-sig-openst...@googlegroups.com>
>> Sent: Wednesday, March 29, 2017 2:28:29 PM
>> Subject: Re: [openstack-dev] [kubernetes][go] External OpenStack Cloud 
>> Provider for Kubernetes
>>
>> Team,
>>
>> Repo is ready:
>> http://git.openstack.org/cgit/openstack/k8s-cloud-provider
>>
>> I've taken the liberty of updating it with the latest changes in the
>> kubernetes/kubernetes repo:
>> https://review.openstack.org/#/q/project:openstack/k8s-cloud-provider is
>> ready
>>
>> So logical next step would be to add CI jobs to test in OpenStack
>> Infra. Anyone interested?
> 
> One question I have around this - do we have a shared view of what the ideal 
> matrix of tested combinations would look like? E.g. kubernetes master on openstack 
> project's master, kubernetes master on openstack project's stable branches 
> (where available), do we also need/want to test kubernetes stable milestones, 
> etc.
> 
> At a high level my goal would be the same as Chris's "k8s gating on OpenStack 
> in the same ways that it does on AWS and GCE." which would imply reporting 
> results on PRs proposed to K8S master *before* they merge but not sure we all 
> agree on what that actually means testing against in practice on the 
> OpenStack side of the equation?

I think we want to have jobs that have the ability to test:

1) A proposed change to k8s-openstack-provider against current master of
OpenStack
2) A proposed change to k8s-openstack-provider against a stable release
of OpenStack
3) A proposed change to OpenStack against current master of
k8s-openstack-provider
4) A proposed change to OpenStack against stable release of
k8s-openstack-provider

Those are all easy now that the code is in gerrit, and it's well defined
what triggers and where it reports.

Additionally, we need to test the surface area between
k8s-openstack-provider and k8s itself. (if we wind up needing to test
k8s against proposed changes to OpenStack then we've likely done
something wrong in life)

5) A proposed change to k8s-openstack-provider against current master of k8s
6) A proposed change to k8s-openstack-provider against a stable release
of k8s
7) A proposed change to k8s against current master of k8s-openstack-provider
8) A proposed change to k8s against stable release of k8s-openstack-provider

5 and 6 are things we can do right now. 7 and 8 will have to wait for GH
support to land in zuul (without which we can neither trigger test jobs
on proposed changes to k8s nor can we report the results back to anyone)

I would recommend that we make 5 and 6 non-voting until such a time as
we are reporting on 7 and 8 back to k8s and have a reasonable
expectation someone will pay attention to failures - otherwise k8s will
be able to wedge the k8s-openstack-provider gate.
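The eight jobs above are just the cross product of four (proposed change,
counterpart) pairs with {master, stable}. A throwaway sketch spells that
structure out (the job names here are invented, not real Zuul jobs):

```python
from itertools import product

# (project the change is proposed to, project it is tested against);
# each pair expands to a master and a stable variant, giving jobs 1-8.
pairs = [
    ("k8s-openstack-provider", "openstack"),  # jobs 1-2
    ("openstack", "k8s-openstack-provider"),  # jobs 3-4
    ("k8s-openstack-provider", "k8s"),        # jobs 5-6
    ("k8s", "k8s-openstack-provider"),        # jobs 7-8
]

jobs = ["check-%s-against-%s-%s" % (change, target, ref)
        for (change, target), ref in product(pairs, ("master", "stable"))]

print(len(jobs))  # 8
```

The point of writing it this way is that adding a new counterpart (say, a
stable k8s milestone) is one new entry in the product, not a hand-written job.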

>> On Sat, Mar 25, 2017 at 12:10 PM, Chris Hoge <ch...@openstack.org> wrote:
>>>
>>>
>>> On Friday, March 24, 2017 at 8:46:42 AM UTC-7, Antoni Segura Puimedon
>>> wrote:
>>>>
>>>>
>>>>
>>>> On Friday, March 24, 2017 at 3:59:18 PM UTC+1, Graham Hayes wrote:
>>>>>
>>>>> On 24/03/17 10:27 -0400, Davanum Srinivas wrote:
>>>>>> Folks,
>>>>>>
>>>>>> As discussed in the etherpad:
>>>>>> https://etherpad.openstack.org/p/go-and-containers
>>>>>>
>>>>>> Here's a request for a repo in OpenStack:
>>>>>> https://review.openstack.org/#/c/449641/
>>>>>>
>>>>>> This request pulls in the existing code from kubernetes/kubernetes
>>>>>> repo and preserves the git history too
>>>>>> https://github.com/dims/k8s-cloud-provider
>>>>>>
>>>>>> Anyone interested? please ping me on Slack or IRC and we can continue
>>>>>> this work.
>>>>>
>>>>> Yeah - I would love to continue the provider work on gerrit :)
>>>>>
>>>>> Is there a way for us to make sure changes in the k8s master don't
>>>>> break our plugin? Or do we need to run periodic jobs on the provider repo
>>>>> to catch breakages in the plugin interface?
>>>>
>>>>
>>>> I suppose the options are either:
>>>>

Re: [openstack-dev] [kubernetes][go] External OpenStack Cloud Provider for Kubernetes

2017-04-02 Thread Monty Taylor
On 04/02/2017 02:53 PM, Chris Hoge wrote:
> Now that the provider has a repository in the OpenStack project
> namespace, we need to move over the existing set of issues and pull
> requests and create an initial work list for migrating patches and
> fixing existing issues.
> 
> I've started up an etherpad where we can track that work[1]. In the longer
> run we should migrate over to Launchpad or Storyboard. One question,
> to help preserve continuity with the K8S community workflow: do we want
> to investigate ways to allow for issue creation in the OpenStack
> namespace on GitHub?

I do not think this is a thing we want to do. While I understand the
urge, a project needs to live somewhere (in this case we've chosen
OpenStack) and should behave as projects do in that location. When I
work on Ansible, I do issues on github. When I deal with tox, I file
issues on bitbucket. Back when I dealt with Jenkins I filed issues in
their Jira. I do not think that filing an issue in the issue tracker for
a project is too onerous of a request to make of someone.

We have issues turned off in all of our github mirrors, so it's highly
unlikely someone will accidentally attempt to file an issue like that.
(it's too bad we can't similarly turn off pull requests, but oh well)


> [1] https://etherpad.openstack.org/p/k8s-provider-issue-migration
> 
> On Friday, March 24, 2017 at 7:27:09 AM UTC-7, Davanum Srinivas wrote:
> 
> Folks,
> 
> As discussed in the etherpad:
> https://etherpad.openstack.org/p/go-and-containers
> 
> 
> Here's a request for a repo in OpenStack:
> https://review.openstack.org/#/c/449641/
> 
> 
> This request pulls in the existing code from kubernetes/kubernetes
> repo and preserves the git history too
> https://github.com/dims/k8s-cloud-provider
> 
> 
> Anyone interested? please ping me on Slack or IRC and we can
> continue this work.
> 
> Thanks,
> Dims
> 
> -- 
> Davanum Srinivas :: https://twitter.com/dims
> 
> 
> 




Re: [openstack-dev] [kubernetes][go] External OpenStack Cloud Provider for Kubernetes

2017-04-02 Thread Chris Hoge
Now that the provider has a repository in the OpenStack project
namespace, we need to move over the existing set of issues and pull
requests and create an initial work list for migrating patches and
fixing existing issues.

I've started up an etherpad where we can track that work[1]. In the longer
run we should migrate over to Launchpad or Storyboard. One question,
to help preserve continuity with the K8S community workflow: do we want
to investigate ways to allow for issue creation in the OpenStack
namespace on GitHub?

-Chris

[1] https://etherpad.openstack.org/p/k8s-provider-issue-migration

On Friday, March 24, 2017 at 7:27:09 AM UTC-7, Davanum Srinivas wrote:
>
> Folks, 
>
> As discussed in the etherpad: 
> https://etherpad.openstack.org/p/go-and-containers 
>
> Here's a request for a repo in OpenStack: 
> https://review.openstack.org/#/c/449641/ 
>
> This request pulls in the existing code from kubernetes/kubernetes 
> repo and preserves the git history too 
> https://github.com/dims/k8s-cloud-provider 
>
> Anyone interested? please ping me on Slack or IRC and we can continue this 
> work. 
>
> Thanks, 
> Dims 
>
> -- 
> Davanum Srinivas :: https://twitter.com/dims 
>


Re: [openstack-dev] [kubernetes][go] External OpenStack Cloud Provider for Kubernetes

2017-03-29 Thread Steve Gordon
- Original Message -
> From: "Davanum Srinivas" <dava...@gmail.com>
> To: "Chris Hoge" <ch...@openstack.org>
> Cc: "OpenStack Development Mailing List (not for usage questions)" 
> <openstack-dev@lists.openstack.org>,
> "kubernetes-sig-openstack" <kubernetes-sig-openst...@googlegroups.com>
> Sent: Wednesday, March 29, 2017 2:28:29 PM
> Subject: Re: [openstack-dev] [kubernetes][go] External OpenStack Cloud 
> Provider for Kubernetes
> 
> Team,
> 
> Repo is ready:
> http://git.openstack.org/cgit/openstack/k8s-cloud-provider
> 
> I've taken the liberty of updating it with the latest changes in the
> kubernetes/kubernetes repo:
> https://review.openstack.org/#/q/project:openstack/k8s-cloud-provider is
> ready
> 
> So logical next step would be to add CI jobs to test in OpenStack
> Infra. Anyone interested?

One question I have around this - do we have a shared view of what the ideal 
matrix of tested combinations would look like? E.g. kubernetes master on openstack 
project's master, kubernetes master on openstack project's stable branches 
(where available), do we also need/want to test kubernetes stable milestones, 
etc.

At a high level my goal would be the same as Chris's "k8s gating on OpenStack 
in the same ways that it does on AWS and GCE." which would imply reporting 
results on PRs proposed to K8S master *before* they merge, but I'm not sure we all 
agree on what that actually means testing against in practice on the OpenStack 
side of the equation.

Thanks,

Steve

> On Sat, Mar 25, 2017 at 12:10 PM, Chris Hoge <ch...@openstack.org> wrote:
> >
> >
> > On Friday, March 24, 2017 at 8:46:42 AM UTC-7, Antoni Segura Puimedon
> > wrote:
> >>
> >>
> >>
> >> On Friday, March 24, 2017 at 3:59:18 PM UTC+1, Graham Hayes wrote:
> >>>
> >>> On 24/03/17 10:27 -0400, Davanum Srinivas wrote:
> >>> >Folks,
> >>> >
> >>> >As discussed in the etherpad:
> >>> >https://etherpad.openstack.org/p/go-and-containers
> >>> >
> >>> >Here's a request for a repo in OpenStack:
> >>> >https://review.openstack.org/#/c/449641/
> >>> >
> >>> >This request pulls in the existing code from kubernetes/kubernetes
> >>> >repo and preserves the git history too
> >>> >https://github.com/dims/k8s-cloud-provider
> >>> >
> >>> >Anyone interested? please ping me on Slack or IRC and we can continue
> >>> > this work.
> >>>
> >>> Yeah - I would love to continue the provider work on gerrit :)
> >>>
> >>> Is there a way for us to make sure changes in the k8s master don't
> >>> break our plugin? Or do we need periodic jobs on the provider repo
> >>> to catch breakages in the plugin interface?
> >>
> >>
> >> I suppose the options are either:
> >>
> >> ask k8s to add select external cloud providers in the CI
> >> Have a webhook in the k8s repo that triggered CI on the OSt infra
> >
> >
> > Yes please to these. My preference is for the provider to remain upstream
> > in
> > k8s, but it's development has stalled out a bit. I want the best provider
> > possible, but also want to make sure it's tested and visible to the k8s
> > community that want to run on OpenStack. I've mentioned before that one of
> > my goals is to have k8s gating on OpenStack in the same ways that it does
> > on
> > AWS and GCE.
> >
> > -Chris
> >
> >
> >>>
> >>>
> >>> Thanks, Graham
> >>>
> >>>
> >>> > >__
> >>> >OpenStack Development Mailing List (not for usage questions)
> >>> >Unsubscribe:
> >>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >>> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > --
> > You received this message because you are subscribed to the Google Groups
> > "kubernetes-sig-openstack" group.
> > To unsubscribe from this group and stop receiving emails from it, send an
> > email to kubernetes-sig-openstack+unsubscr...@googlegroups.com.
> > To post to this group, send email to
> > kubernetes-sig-openst...@googlegroups.com.
> > To view this discussion on the web visit
> > https://groups.google.com/d/msgid/kubernetes-sig-openstack/a7b56756-7efe-4179-8467-6a689f1abe63%40googlegroups.com.
> >
> > For more options, visit https://groups.google.com/d/optout.
> 
> 
> 
> --
> Davanum Srinivas :: https://twitter.com/dims
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Steve Gordon,
Principal Product Manager,
Red Hat OpenStack Platform

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kubernetes][go] External OpenStack Cloud Provider for Kubernetes

2017-03-29 Thread Davanum Srinivas
Team,

Repo is ready:
http://git.openstack.org/cgit/openstack/k8s-cloud-provider

I've taken the liberty of updating it with the latest changes in the
kubernetes/kubernetes repo:
https://review.openstack.org/#/q/project:openstack/k8s-cloud-provider is ready

So logical next step would be to add CI jobs to test in OpenStack
Infra. Anyone interested?

Thanks,
Dims

On Sat, Mar 25, 2017 at 12:10 PM, Chris Hoge  wrote:
>
>
> On Friday, March 24, 2017 at 8:46:42 AM UTC-7, Antoni Segura Puimedon wrote:
>>
>>
>>
>> On Friday, March 24, 2017 at 3:59:18 PM UTC+1, Graham Hayes wrote:
>>>
>>> On 24/03/17 10:27 -0400, Davanum Srinivas wrote:
>>> >Folks,
>>> >
>>> >As discussed in the etherpad:
>>> >https://etherpad.openstack.org/p/go-and-containers
>>> >
>>> >Here's a request for a repo in OpenStack:
>>> >https://review.openstack.org/#/c/449641/
>>> >
>>> >This request pulls in the existing code from kubernetes/kubernetes
>>> >repo and preserves the git history too
>>> >https://github.com/dims/k8s-cloud-provider
>>> >
>>> >Anyone interested? please ping me on Slack or IRC and we can continue
>>> > this work.
>>>
>>> Yeah - I would love to continue the provider work on gerrit :)
>>>
>>> Is there a way for us to make sure changes in the k8s master don't
>>> break our plugin? Or do we need periodic jobs on the provider repo
>>> to catch breakages in the plugin interface?
>>
>>
>> I suppose the options are either:
>>
>> ask k8s to add select external cloud providers in the CI
>> Have a webhook in the k8s repo that triggered CI on the OSt infra
>
>
> Yes please to these. My preference is for the provider to remain upstream in
> k8s, but its development has stalled out a bit. I want the best provider
> possible, but also want to make sure it's tested and visible to the k8s
> community that wants to run on OpenStack. I've mentioned before that one of
> my goals is to have k8s gating on OpenStack in the same ways that it does on
> AWS and GCE.
>
> -Chris
>
>
>>>
>>>
>>> Thanks, Graham
>>>
>>>
>>> > >__
>>> >OpenStack Development Mailing List (not for usage questions)
>>> >Unsubscribe:
>>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> You received this message because you are subscribed to the Google Groups
> "kubernetes-sig-openstack" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kubernetes-sig-openstack+unsubscr...@googlegroups.com.
> To post to this group, send email to
> kubernetes-sig-openst...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/kubernetes-sig-openstack/a7b56756-7efe-4179-8467-6a689f1abe63%40googlegroups.com.
>
> For more options, visit https://groups.google.com/d/optout.



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kubernetes][go] External OpenStack Cloud Provider for Kubernetes

2017-03-25 Thread Chris Hoge


On Friday, March 24, 2017 at 8:46:42 AM UTC-7, Antoni Segura Puimedon wrote:
>
>
>
> On Friday, March 24, 2017 at 3:59:18 PM UTC+1, Graham Hayes wrote:
>>
>> On 24/03/17 10:27 -0400, Davanum Srinivas wrote: 
>> >Folks, 
>> > 
>> >As discussed in the etherpad: 
>> >https://etherpad.openstack.org/p/go-and-containers 
>> > 
>> >Here's a request for a repo in OpenStack: 
>> >https://review.openstack.org/#/c/449641/ 
>> > 
>> >This request pulls in the existing code from kubernetes/kubernetes 
>> >repo and preserves the git history too 
>> >https://github.com/dims/k8s-cloud-provider 
>> > 
>> >Anyone interested? please ping me on Slack or IRC and we can continue 
>> this work. 
>>
>> Yeah - I would love to continue the provider work on gerrit :) 
>>
>> Is there a way for us to make sure changes in the k8s master don't 
>> break our plugin? Or do we need periodic jobs on the provider repo 
>> to catch breakages in the plugin interface? 
>>
>
> I suppose the options are either:
>
> ask k8s to add select external cloud providers in the CI
> Have a webhook in the k8s repo that triggered CI on the OSt infra 
>

Yes please to these. My preference is for the provider to remain upstream 
in k8s, but its development has stalled out a bit. I want the best 
provider possible, but also want to make sure it's tested and visible to 
the k8s community that wants to run on OpenStack. I've mentioned before that 
one of my goals is to have k8s gating on OpenStack in the same ways that it 
does on AWS and GCE.

-Chris

 

>
>> Thanks, Graham 
>>
>> >__ 
>>
>> >OpenStack Development Mailing List (not for usage questions) 
>> >Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
>> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
>>
>__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kubernetes][go] External OpenStack Cloud Provider for Kubernetes

2017-03-24 Thread Monty Taylor
On 03/24/2017 11:22 AM, Graham Hayes wrote:
> On 24/03/17 08:46 -0700, Antoni Segura Puimedon wrote:
>>
>>
>> On Friday, March 24, 2017 at 3:59:18 PM UTC+1, Graham Hayes wrote:
>>
>>On 24/03/17 10:27 -0400, Davanum Srinivas wrote:
>>>Folks,
>>>
>>>As discussed in the etherpad:
>>>https://etherpad.openstack.org/p/go-and-containers
>>>
>>>Here's a request for a repo in OpenStack:
>>>https://review.openstack.org/#/c/449641/
>>>
>>>This request pulls in the existing code from kubernetes/kubernetes
>>>repo and preserves the git history too
>>>https://github.com/dims/k8s-cloud-provider
>>>
>>>Anyone interested? please ping me on Slack or IRC and we can
>> continue this
>>work.
>>
>>Yeah - I would love to continue the provider work on gerrit :)
>>
>>Is there a way for us to make sure changes in the k8s master don't
>>break our plugin? Or do we need periodic jobs on the provider repo
>>to catch breakages in the plugin interface?
>>
>>
>> I suppose the options are either:
>>
>> ask k8s to add select external cloud providers in the CI
>> Have a webhook in the k8s repo that triggered CI on the OSt infra 
>>  
> 
> Yup - I just want to have us get our ducks in a row before we make a
> move.
> 
> From our side, we should look at the support matrix of what OpenStack
> versions we support, and how we plan on testing them in -infra.

We will have better first-class support for this in a few months as part
of rolling out zuul v3. Once the github branch lands and we get v3
rolled out for non-infra projects, we'll be able to cross-test things in
gerrit with things not in gerrit (we have a similar need to be able to
test that ansible PRs don't break zuul).

For now, if you can make sure that you have a test that can install the
k8s repo from source, and also that is structured such that if it
discovers that the k8s repo is there that it will not re-clone, we
should be able to upgrade that in the future to having zuul manage the
triggering and cloning.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kubernetes][go] External OpenStack Cloud Provider for Kubernetes

2017-03-24 Thread Graham Hayes

On 24/03/17 08:46 -0700, Antoni Segura Puimedon wrote:



On Friday, March 24, 2017 at 3:59:18 PM UTC+1, Graham Hayes wrote:

   On 24/03/17 10:27 -0400, Davanum Srinivas wrote:
   >Folks,
   >
   >As discussed in the etherpad:
   >https://etherpad.openstack.org/p/go-and-containers
   >
   >Here's a request for a repo in OpenStack:
   >https://review.openstack.org/#/c/449641/
   >
   >This request pulls in the existing code from kubernetes/kubernetes
   >repo and preserves the git history too
   >https://github.com/dims/k8s-cloud-provider
   >
   >Anyone interested? please ping me on Slack or IRC and we can continue this
   work.

   Yeah - I would love to continue the provider work on gerrit :)

   Is there a way for us to make sure changes in the k8s master don't
   break our plugin? Or do we need periodic jobs on the provider repo
   to catch breakages in the plugin interface?


I suppose the options are either:

ask k8s to add select external cloud providers in the CI
Have a webhook in the k8s repo that triggered CI on the OSt infra 
 


Yup - I just want to have us get our ducks in a row before we make a
move.


From our side, we should look at the support matrix of what OpenStack
versions we support, and how we plan on testing them in -infra.



   Thanks, Graham

   >__
   >OpenStack Development Mailing List (not for usage questions)
   >Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
   >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [kubernetes][go] External OpenStack Cloud Provider for Kubernetes

2017-03-24 Thread Antoni Segura Puimedon


On Friday, March 24, 2017 at 3:59:18 PM UTC+1, Graham Hayes wrote:
>
> On 24/03/17 10:27 -0400, Davanum Srinivas wrote: 
> >Folks, 
> > 
> >As discussed in the etherpad: 
> >https://etherpad.openstack.org/p/go-and-containers 
> > 
> >Here's a request for a repo in OpenStack: 
> >https://review.openstack.org/#/c/449641/ 
> > 
> >This request pulls in the existing code from kubernetes/kubernetes 
> >repo and preserves the git history too 
> >https://github.com/dims/k8s-cloud-provider 
> > 
> >Anyone interested? please ping me on Slack or IRC and we can continue 
> this work. 
>
> Yeah - I would love to continue the provider work on gerrit :) 
>
> Is there a way for us to make sure changes in the k8s master don't 
> break our plugin? Or do we need periodic jobs on the provider repo 
> to catch breakages in the plugin interface? 
>

I suppose the options are either:

- ask k8s to add select external cloud providers in the CI
- have a webhook in the k8s repo that triggers CI on the OSt infra 
 

>
> Thanks, Graham 
>
> >__ 
>
> >OpenStack Development Mailing List (not for usage questions) 
> >Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kubernetes][go] External OpenStack Cloud Provider for Kubernetes

2017-03-24 Thread Graham Hayes

On 24/03/17 10:27 -0400, Davanum Srinivas wrote:

Folks,

As discussed in the etherpad:
https://etherpad.openstack.org/p/go-and-containers

Here's a request for a repo in OpenStack:
https://review.openstack.org/#/c/449641/

This request pulls in the existing code from kubernetes/kubernetes
repo and preserves the git history too
https://github.com/dims/k8s-cloud-provider

Anyone interested? please ping me on Slack or IRC and we can continue this work.


Yeah - I would love to continue the provider work on gerrit :)

Is there a way for us to make sure changes in the k8s master don't
break our plugin? Or do we need periodic jobs on the provider repo
to catch breakages in the plugin interface?

Thanks, Graham


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




[openstack-dev] [kubernetes][go] External OpenStack Cloud Provider for Kubernetes

2017-03-24 Thread Davanum Srinivas
Folks,

As discussed in the etherpad:
https://etherpad.openstack.org/p/go-and-containers

Here's a request for a repo in OpenStack:
https://review.openstack.org/#/c/449641/

This request pulls in the existing code from kubernetes/kubernetes
repo and preserves the git history too
https://github.com/dims/k8s-cloud-provider

Anyone interested? please ping me on Slack or IRC and we can continue this work.

Thanks,
Dims

-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kubernetes][kolla][openstack-helm][magnum] Kubernetes Day at OpenStack Summit NA 2017 announced!

2017-02-27 Thread Ihor Dvoretskyi
Hello everyone,

On behalf of the Kubernetes Community and the OpenStack Special Interest Group [0],
I'm happy to announce Kubernetes Day at OpenStack Summit NA 2017. The event
will be hosted by CNCF as a part of OpenStack’s Open Source Days in Boston
[1].

The CFP process is already open - feel free to submit your talk. You can find
more detailed information about the event on the CNCF event page [2].

Special thanks to CNCF, OpenStack Foundation, and individuals who made this
happen.

0. https://github.com/kubernetes/community/blob/master/sig-openstack/README.md
1. https://www.openstack.org/summit/boston-2017/open-source-days/
2. https://www.cncf.io/event/openstack-north-america-2017
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kubernetes][kolla]

2016-05-26 Thread Ryan Hallisey
I think the community will want to split apart the CLI to run tasks.  This was 
an idea being thrown around at the same time as the etcd addition.  This would 
give the operator the ability, like you said, to skip any task that isn't 
required.

Using etcd is a way for the operator to guarantee that a bootstrapping task can 
run without another service interrupting it.
The goal is to try and make use of the Kubernetes like workflow as much as 
possible.  I agree, the community should avoid
automagic setup.  It can lead to a lot of dangerous corner cases. I think Kolla 
learned this lesson way back during the
compose era.

The tasks are defined as:
  - bootstrap /
  - deploy /

Any further workflow tweaking could be handled by contacting etcd.  The 
community could also break down the tasks further
if there is a use case for it.
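As a rough illustration of that split (the `bootstrap`/`deploy` task names come
from the email; the function and its `skip` parameter are hypothetical, not the
actual kolla-kubernetes CLI):

```python
def run_tasks(service, tasks, skip=(), runner=print):
    """Run a service's tasks in order, letting the operator skip any
    task that is not required (e.g. skip 'bootstrap' when migrating an
    existing, already-initialized cloud onto containers).

    `runner` is a stand-in for whatever actually executes a task.
    Returns the list of tasks that were executed.
    """
    executed = []
    for task in tasks:
        if task in skip:
            continue  # operator explicitly opted out of this step
        runner("%s/%s" % (service, task))
        executed.append(task)
    return executed
```

For example, `run_tasks("keystone", ["bootstrap", "deploy"], skip={"bootstrap"})`
would run only the deploy step.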

Thanks,
Ryan

- Original Message -
From: "Kevin M Fox" <kevin@pnnl.gov>
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Sent: Thursday, May 26, 2016 11:41:12 AM
Subject: Re: [openstack-dev] [kubernetes]

Two issues I can see with that approach.

1. It needs to be incredibly well documented as well as tools provided to 
update states in etcd manually when an op needs to recover from things 
partially working.
2. Consider the case where an op has an existing cloud. He/She installs k8s on 
their existing control plane, and then one openstack service at a time wants to 
"upgrade" the system from non container to containers. If the user wants to do 
so, with the jobs method, the op just skips the bootstrap jobs. With magic 
baked into the containers and etcd, the same kinds of things in issue #1 needs 
fixing in etcd so it doesn't try and reinit things. This makes it harder to get 
clouds migrated to kolla-k8s.

I know the idea is to try and simplify deployment by making the containers do 
all the initing automagically. but I'm afraid that just sweeps issues under the 
rug, out of the light where they still will come up, but more unexpectedly. The 
ops still need to understand the automagic that is happening. As an Op, I'd 
rather it be explicit, out front, where I know its happening, and I can easily 
tweak the workflow when necessary to get out of a bind.

Thanks,
Kevin

From: Ryan Hallisey [rhall...@redhat.com]
Sent: Thursday, May 26, 2016 5:20 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kubernetes]

Thanks for the feedback Kevin.

The community has been investigating other options this week.  The option that 
is currently being looked at involves
using etcd to provide a locking mechanism so that services in the cluster are 
aware bootstrapping is underway.

The concept involves extending kolla's dockerfiles and having them poll etcd to 
determine whether a bootstrap is in progress or complete [1].

I'll follow up by adding this to the spec.

Thanks,
Ryan

[1] - https://review.openstack.org/#/c/320744/

- Original Message -
From: "Kevin M Fox" <kevin@pnnl.gov>
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Sent: Monday, May 23, 2016 11:37:33 AM
Subject: Re: [openstack-dev] [kubernetes]

+1 for using k8s to do work where possible.

-1 for trying to shoehorn a feature in so that k8s can deal with stuff it's not 
ready to handle. We need to ensure Operators have everything they need in order 
to successfully operate their cloud.

The current upgrade stuff in k8s is focused around replacing one, usually 
stateless, thing for another. It never had Database Schema upgrades in mind.  
It is great to use for minor version bumps. It is insufficient for major 
OpenStack upgrades. If you follow the OpenStack release notes, they tend to be 
quite linear, in a workflow. K8s isn't designed for that. Hence the need for a 
tool outside of k8s to drive the creation/upgrading of Deployments and Jobs in 
the proper order.

Init containers also do not look like a good fit. As far as I can gather from 
the spec, they are intended to init something on a node when a pod is spawned. 
This is a very different thing from upgrading a shared database's schema. I 
don't believe they should be used for that.

I've upgraded many OpenStack clouds over the years. One of the things that has 
bit me from time to time is a failed schema update and having to tweak code and 
then rerun schema upgrades. This will continue to happen and needs to be 
covered. The Job's workflow as discussed in the spec allows an operator to do 
just that. Hiding it in an init container makes that much harder for Operators.

As an Op, we need the ability to tweak the workflow as needed and run/rerun 
only the pieces that we need.

Thanks,
Kevin

From: Ryan Hallisey [rhall...@redhat.com]
Sent: Sunday, M

Re: [openstack-dev] [kubernetes]

2016-05-26 Thread Fox, Kevin M
Two issues I can see with that approach.

1. It needs to be incredibly well documented as well as tools provided to 
update states in etcd manually when an op needs to recover from things 
partially working.
2. Consider the case where an op has an existing cloud. He/She installs k8s on 
their existing control plane, and then wants to "upgrade" the system one 
openstack service at a time from non-container to containers. If the user wants to do 
so, with the jobs method, the op just skips the bootstrap jobs. With magic 
baked into the containers and etcd, the same kinds of things in issue #1 need 
fixing in etcd so it doesn't try and reinit things. This makes it harder to get 
clouds migrated to kolla-k8s.

I know the idea is to try and simplify deployment by making the containers do 
all the initing automagically. but I'm afraid that just sweeps issues under the 
rug, out of the light where they still will come up, but more unexpectedly. The 
ops still need to understand the automagic that is happening. As an Op, I'd 
rather it be explicit, out front, where I know its happening, and I can easily 
tweak the workflow when necessary to get out of a bind.

Thanks,
Kevin

From: Ryan Hallisey [rhall...@redhat.com]
Sent: Thursday, May 26, 2016 5:20 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kubernetes]

Thanks for the feedback Kevin.

The community has been investigating other options this week.  The option that 
is currently being looked at involves
using etcd to provide a locking mechanism so that services in the cluster are 
aware bootstrapping is underway.

The concept involves extending kolla's dockerfiles and having them poll etcd to 
determine whether a bootstrap is in progress or complete [1].

I'll follow up by adding this to the spec.

Thanks,
Ryan

[1] - https://review.openstack.org/#/c/320744/

- Original Message -
From: "Kevin M Fox" <kevin@pnnl.gov>
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Sent: Monday, May 23, 2016 11:37:33 AM
Subject: Re: [openstack-dev] [kubernetes]

+1 for using k8s to do work where possible.

-1 for trying to shoehorn a feature in so that k8s can deal with stuff it's not 
ready to handle. We need to ensure Operators have everything they need in order 
to successfully operate their cloud.

The current upgrade stuff in k8s is focused around replacing one, usually 
stateless, thing for another. It never had Database Schema upgrades in mind.  
It is great to use for minor version bumps. It is insufficient for major 
OpenStack upgrades. If you follow the OpenStack release notes, they tend to be 
quite linear, in a workflow. K8s isn't designed for that. Hence the need for a 
tool outside of k8s to drive the creation/upgrading of Deployments and Jobs in 
the proper order.

Init containers also do not look like a good fit. As far as I can gather from 
the spec, they are intended to init something on a node when a pod is spawned. 
This is a very different thing from upgrading a shared database's schema. I 
don't believe they should be used for that.

I've upgraded many OpenStack clouds over the years. One of the things that has 
bit me from time to time is a failed schema update and having to tweak code and 
then rerun schema upgrades. This will continue to happen and needs to be 
covered. The Job's workflow as discussed in the spec allows an operator to do 
just that. Hiding it in an init container makes that much harder for Operators.

As an Op, we need the ability to tweak the workflow as needed and run/rerun 
only the pieces that we need.

Thanks,
Kevin

From: Ryan Hallisey [rhall...@redhat.com]
Sent: Sunday, May 22, 2016 12:50 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev]  [kolla][kolla-kubernetes][kubernetes]

Hi all,

At the Kolla meeting last week, I brought up some of the challenges around the 
bootstrapping
process in Kubernetes.  The main highlight revolved around how the 
bootstrapping process will
work.

Currently, in the kolla-kubernetes spec [1], the process for bootstrapping 
involves
outside orchestration running Kubernetes 'Jobs' that will handle the database 
initialization,
creating users, etc.  One of the flaws in this approach is that 
kolla-kubernetes can't use
native Kubernetes upgrade tooling. Kubernetes does upgrades as a single action 
that scales
down running containers and replaces them with the upgraded containers. So 
instead of having
Kubernetes manage the upgrade, it would be guided by an external engine.  Not 
saying this is
a bad thing, but it does loosen the control Kubernetes would have over stack 
management.

Kubernetes does have some incoming new features that are a step in the right 
direction to
allow for kolla-kubernetes to make complete use of K

Re: [openstack-dev] [kubernetes]

2016-05-26 Thread Ryan Hallisey
Thanks for the feedback Kevin.

The community has been investigating other options this week.  The option that 
is currently being looked at involves
using etcd to provide a locking mechanism so that services in the cluster are 
aware bootstrapping is underway.

The concept involves extending kolla's dockerfiles and having them poll etcd to 
determine whether a bootstrap is in progress or complete [1].

I'll follow up by adding this to the spec.

Thanks,
Ryan

[1] - https://review.openstack.org/#/c/320744/
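The locking idea can be sketched with etcd-style create-if-absent semantics. This
is only an illustration: a plain dict stands in for etcd here, and a real container
would use the etcd HTTP API or `etcdctl` instead, where "create with
prevExist=false" provides the atomicity.

```python
def try_acquire_bootstrap(store, service):
    """Atomically mark a service's bootstrap as in progress.

    Only the first container to create the key wins; the others see
    'in-progress' (or 'done') and wait instead of re-running the
    bootstrap. `store` is a dict standing in for etcd.
    """
    key = "/kolla/%s/bootstrap" % service
    if key in store:                 # someone else already started (or finished)
        return False, store[key]
    store[key] = "in-progress"       # etcd equivalent: create, prevExist=false
    return True, "in-progress"


def finish_bootstrap(store, service):
    """Record that the bootstrap completed, so later pods skip it."""
    store["/kolla/%s/bootstrap" % service] = "done"
```

A pod that loses the race would poll the key until it reads "done" before
starting its service.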

- Original Message -
From: "Kevin M Fox" <kevin@pnnl.gov>
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Sent: Monday, May 23, 2016 11:37:33 AM
Subject: Re: [openstack-dev] [kubernetes]

+1 for using k8s to do work where possible.

-1 for trying to shoehorn a feature in so that k8s can deal with stuff it's not 
ready to handle. We need to ensure Operators have everything they need in order 
to successfully operate their cloud.

The current upgrade stuff in k8s is focused around replacing one, usually 
stateless, thing for another. It never had Database Schema upgrades in mind.  
It is great to use for minor version bumps. It is insufficient for major 
OpenStack upgrades. If you follow the OpenStack release notes, they tend to be 
quite linear, in a workflow. K8s isn't designed for that. Hence the need for a 
tool outside of k8s to drive the creation/upgrading of Deployments and Jobs in 
the proper order.

Init containers also do not look like a good fit. As far as I can gather from 
the spec, they are intended to init something on a node when a pod is spawned. 
This is a very different thing from upgrading a shared database's schema. I 
don't believe they should be used for that.

I've upgraded many OpenStack clouds over the years. One of the things that has 
bit me from time to time is a failed schema update and having to tweak code and 
then rerun schema upgrades. This will continue to happen and needs to be 
covered. The Job's workflow as discussed in the spec allows an operator to do 
just that. Hiding it in an init container makes that much harder for Operators.

As an Op, we need the ability to tweak the workflow as needed and run/rerun 
only the pieces that we need.

Thanks,
Kevin

From: Ryan Hallisey [rhall...@redhat.com]
Sent: Sunday, May 22, 2016 12:50 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev]  [kolla][kolla-kubernetes][kubernetes]

Hi all,

At the Kolla meeting last week, I brought up some of the challenges around the 
bootstrapping
process in Kubernetes.  The main highlight revolved around how the 
bootstrapping process will
work.

Currently, in the kolla-kubernetes spec [1], the process for bootstrapping 
involves
outside orchestration running Kubernetes 'Jobs' that will handle the database 
initialization,
creating users, etc.  One of the flaws in this approach is that 
kolla-kubernetes can't use
native Kubernetes upgrade tooling. Kubernetes does upgrades as a single action 
that scales
down running containers and replaces them with the upgraded containers. So 
instead of having
Kubernetes manage the upgrade, it would be guided by an external engine.  Not 
saying this is
a bad thing, but it does loosen the control Kubernetes would have over stack 
management.

Kubernetes does have some incoming new features that are a step in the right 
direction to
allow for kolla-kubernetes to make complete use of Kubernetes tooling like init 
containers [2].
There is also the introduction of wait.for conditions in kubectl [3].

   kubectl get pod my-pod --wait --wait-for="pod-running"

Upgrades will be in the distant future for kolla-kubernetes, but I want to make 
sure the
community maintains an open mind about bootstrap/upgrades since there are 
potentially many
options that could come down the road.

I encourage everyone to add your input to the spec!

Thanks,
Ryan

[1] SPEC - https://review.openstack.org/#/c/304182/
[2] Init containers - https://github.com/kubernetes/kubernetes/pull/23567
[3] wait.for kubectl - https://github.com/kubernetes/kubernetes/issues/1899
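Until a `--wait-for` flag exists, the same effect is typically approximated by
polling. A minimal sketch, where the `get_phase` callable is a stand-in for
whatever queries the pod's state (e.g. a `kubectl get pod` call):

```python
import time


def wait_for(get_phase, want="Running", timeout=60, interval=1, sleep=time.sleep):
    """Poll a pod's phase until it reaches `want` or the timeout expires.

    Returns True if the wanted phase was observed, False on timeout.
    `sleep` is injectable so the loop can be tested without waiting.
    """
    waited = 0
    while waited < timeout:
        if get_phase() == want:
            return True
        sleep(interval)
        waited += interval
    return False
```

This is the loop that `kubectl get pod my-pod --wait --wait-for="pod-running"`
would replace.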

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@li

Re: [openstack-dev] [kubernetes]

2016-05-23 Thread Fox, Kevin M
+1 for using k8s to do work where possible.

-1 for trying to shoehorn a feature in so that k8s can deal with stuff it's not ready to handle. We need to ensure Operators have everything they need in order to successfully operate their cloud.

The current upgrade support in k8s is focused on replacing one, usually stateless, thing with another. It never had database schema upgrades in mind. It is great for minor version bumps, but it is insufficient for major OpenStack upgrades. If you follow the OpenStack release notes, they tend to describe a fairly linear workflow, and k8s isn't designed for that. Hence the need for a tool outside of k8s to drive the creation/upgrading of Deployments and Jobs in the proper order.

Init containers also do not look like a good fit. As far as I can gather from the spec, they are intended to initialize something on a node when a pod is spawned. That is a very different thing from upgrading a shared database's schema, and I don't believe they should be used for it.

I've upgraded many OpenStack clouds over the years. One of the things that has bitten me from time to time is a failed schema update, which means tweaking code and then rerunning the schema upgrade. This will continue to happen and needs to be covered. The Jobs workflow discussed in the spec allows an operator to do just that; hiding it in an init container makes it much harder for Operators.

As Ops, we need the ability to tweak the workflow as needed and run/rerun only the pieces that we need.
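A minimal sketch of the kind of operator-driven flow described here might look like the following (hypothetical; the manifest and resource names are illustrative, not taken from the spec, and this assumes a running cluster):

```shell
# Hypothetical operator-driven bootstrap/upgrade flow; all names illustrative.
# 1. Run the schema upgrade as a Kubernetes Job.
kubectl create -f nova-db-sync-job.yaml

# 2. Wait for it; on failure the operator can fix things and re-create the Job.
until kubectl get job nova-db-sync -o jsonpath='{.status.succeeded}' | grep -q '^1'; do
  sleep 5
done

# 3. Only once the schema is upgraded, roll out the new service containers.
kubectl apply -f nova-api-deployment.yaml
```

The point of keeping this outside k8s is exactly what is argued above: each step can be inspected, tweaked, and re-run independently by the operator.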

Thanks,
Kevin

From: Ryan Hallisey [rhall...@redhat.com]
Sent: Sunday, May 22, 2016 12:50 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev]  [kolla][kolla-kubernetes][kubernetes]

Hi all,

At the Kolla meeting last week, I brought up some of the challenges around the bootstrapping process in Kubernetes. The main question was how the bootstrapping process will work.

Currently, in the kolla-kubernetes spec [1], bootstrapping involves outside orchestration running Kubernetes 'Jobs' that handle database initialization, creating users, etc. One of the flaws in this approach is that kolla-kubernetes can't use native Kubernetes upgrade tooling. Kubernetes does upgrades as a single action that scales down running containers and replaces them with the upgraded containers, so instead of having Kubernetes manage the upgrade, it would be guided by an external engine. That isn't necessarily a bad thing, but it does loosen the control Kubernetes would have over stack management.
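As a rough illustration, a bootstrap Job of the kind described might look something like this (a hypothetical sketch; the image name and command are illustrative, not taken from the spec):

```yaml
# Hypothetical bootstrap Job; image and command are illustrative only.
apiVersion: batch/v1
kind: Job
metadata:
  name: nova-db-sync
spec:
  template:
    spec:
      containers:
      - name: nova-db-sync
        image: kolla/centos-binary-nova-api:2.0.0
        command: ["nova-manage", "db", "sync"]
      restartPolicy: OnFailure
```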

Kubernetes does have some upcoming features that are a step in the right direction and would allow kolla-kubernetes to make fuller use of Kubernetes tooling, like init containers [2]. There is also a proposal to introduce wait-for conditions in kubectl [3]:

   kubectl get pod my-pod --wait --wait-for="pod-running"

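To make the dependency idea concrete, an init container could eventually let a pod express its own wait-for-dependency step, roughly along these lines (a hypothetical sketch; the feature was still in review at the time of this thread, and the names and images are illustrative):

```yaml
# Hypothetical pod using an init container to wait for its database.
apiVersion: v1
kind: Pod
metadata:
  name: nova-api
spec:
  initContainers:
  - name: wait-for-db
    image: busybox
    # Block until the database service answers before the main container starts.
    command: ['sh', '-c', 'until nc -z mariadb 3306; do sleep 2; done']
  containers:
  - name: nova-api
    image: kolla/centos-binary-nova-api:2.0.0
```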
Upgrades will be in the distant future for kolla-kubernetes, but I want to make sure the community maintains an open mind about bootstrap/upgrades since there are potentially many options that could come down the road.

I encourage everyone to add your input to the spec!

Thanks,
Ryan

[1] SPEC - https://review.openstack.org/#/c/304182/
[2] Init containers - https://github.com/kubernetes/kubernetes/pull/23567
[3] wait.for kubectl - https://github.com/kubernetes/kubernetes/issues/1899

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [kubernetes] Introducing the Kubernetes OpenStack Special Interest Group

2016-04-24 Thread Zhipeng Huang
got it thanks :)

On Sun, Apr 24, 2016 at 10:52 PM, Steve Gordon  wrote:

> - Original Message -
> > From: "Zhipeng Huang" 
> > To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> >
> > Hi Ihor,
> >
> > This is great news! As a matter of fact, at the moment the Tricircle team
> > from OpenStack is working with the Ubernetes team led by Quinton to explore
> > how OpenStack federation could help K8s federation work better. This SIG
> > seems like a good fit for our ongoing work.
> >
> > Will you guys have a session regarding this at the summit? How could we
> > approach you?
>
> Hi Zhipeng,
>
> Communications info for the SIG and initial scoping is listed at the
> bottom of the blog post Ihor linked:
>
>
> http://blog.kubernetes.io/2016/04/introducing-kubernetes-openstack-sig.html
>
> We do not have a formal session of our own (though there is some discussion
> on the mailing list about an informal gathering during the week); rather, we
> will focus on working within the existing efforts (e.g. the Ops sessions on
> containerization to gather feedback, the relevant kolla/kuryr/magnum design
> sessions, etc.).
>
> Thanks,
>
> Steve
>
> > On Sat, Apr 23, 2016 at 8:19 AM, Ihor Dvoretskyi <
> idvorets...@mirantis.com>
> > wrote:
> >
> > > Colleagues, I'm happy to announce to the OpenStack community the
> > > Kubernetes OpenStack Special Interest Group.
> > >
> > > The Kubernetes community is currently working toward deeper integration
> > > between OpenStack and Kubernetes. One of the main aims now is to enable
> > > OpenStack as a platform for running Kubernetes clusters, and Kubernetes
> > > as the underlying layer for running OpenStack workloads.
> > >
> > > Steve Gordon and I have prepared a blog post which briefly describes our
> > > activities within the community [1].
> > >
> > > If you have any questions or suggestions regarding the Kubernetes and
> > > OpenStack-related activities, don't hesitate to join us [2]. And of
> > > course, you may reach us at the OpenStack Summit '16 in Austin!
> > >
> > > [1]
> > >
> http://blog.kubernetes.io/2016/04/introducing-kubernetes-openstack-sig.html
> > > [2] https://github.com/kubernetes/kubernetes/wiki/SIG-Openstack
> > >
> > > --
> > > Best regards,
> > >
> > > Ihor Dvoretskyi,
> > > OpenStack Operations Engineer
> > >
> > > ---
> > >
> > > Mirantis, Inc. (925) 808-FUEL
> > >
> > >
> > >
> > >
> >
> >
> > --
> > Zhipeng (Howard) Huang
> >
> > Standard Engineer
> > IT Standard & Patent/IT Product Line
> > Huawei Technologies Co., Ltd
> > Email: huangzhip...@huawei.com
> > Office: Huawei Industrial Base, Longgang, Shenzhen
> >
> > (Previous)
> > Research Assistant
> > Mobile Ad-Hoc Network Lab, Calit2
> > University of California, Irvine
> > Email: zhipe...@uci.edu
> > Office: Calit2 Building Room 2402
> >
> > OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
> >
> >
> >
>
> --
> Steve Gordon,
> Principal Product Manager,
> Red Hat OpenStack Platform
>
>



-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado


Re: [openstack-dev] [kubernetes] Introducing the Kubernetes OpenStack Special Interest Group

2016-04-24 Thread Steve Gordon
- Original Message -
> From: "Zhipeng Huang" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> 
> Hi Ihor,
> 
> This is great news! As a matter of fact, at the moment the Tricircle team
> from OpenStack is working with the Ubernetes team led by Quinton to explore
> how OpenStack federation could help K8s federation work better. This SIG
> seems like a good fit for our ongoing work.
> 
> Will you guys have a session regarding this at the summit? How could we
> approach you?

Hi Zhipeng,

Communications info for the SIG and initial scoping is listed at the bottom of 
the blog post Ihor linked:

http://blog.kubernetes.io/2016/04/introducing-kubernetes-openstack-sig.html

We do not have a formal session of our own (though there is some discussion on 
the mailing list about an informal gathering during the week); rather, we will 
focus on working within the existing efforts (e.g. the Ops sessions on 
containerization to gather feedback, the relevant kolla/kuryr/magnum design 
sessions, etc.).

Thanks,

Steve

> On Sat, Apr 23, 2016 at 8:19 AM, Ihor Dvoretskyi 
> wrote:
> 
> > Colleagues, I'm happy to announce to the OpenStack community the
> > Kubernetes OpenStack Special Interest Group.
> >
> > The Kubernetes community is currently working toward deeper integration
> > between OpenStack and Kubernetes. One of the main aims now is to enable
> > OpenStack as a platform for running Kubernetes clusters, and Kubernetes as
> > the underlying layer for running OpenStack workloads.
> >
> > Steve Gordon and I have prepared a blog post which briefly describes our
> > activities within the community [1].
> >
> > If you have any questions or suggestions regarding the Kubernetes and
> > OpenStack-related activities, don't hesitate to join us [2]. And of
> > course, you may reach us at the OpenStack Summit '16 in Austin!
> >
> > [1]
> > http://blog.kubernetes.io/2016/04/introducing-kubernetes-openstack-sig.html
> > [2] https://github.com/kubernetes/kubernetes/wiki/SIG-Openstack
> >
> > --
> > Best regards,
> >
> > Ihor Dvoretskyi,
> > OpenStack Operations Engineer
> >
> > ---
> >
> > Mirantis, Inc. (925) 808-FUEL
> >
> >
> >
> 
> 
> --
> Zhipeng (Howard) Huang
> 
> Standard Engineer
> IT Standard & Patent/IT Product Line
> Huawei Technologies Co., Ltd
> Email: huangzhip...@huawei.com
> Office: Huawei Industrial Base, Longgang, Shenzhen
> 
> (Previous)
> Research Assistant
> Mobile Ad-Hoc Network Lab, Calit2
> University of California, Irvine
> Email: zhipe...@uci.edu
> Office: Calit2 Building Room 2402
> 
> OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
> 
> 

-- 
Steve Gordon,
Principal Product Manager,
Red Hat OpenStack Platform



Re: [openstack-dev] [kubernetes] Introducing the Kubernetes OpenStack Special Interest Group

2016-04-22 Thread Zhipeng Huang
Hi Ihor,

This is great news! As a matter of fact, at the moment the Tricircle team
from OpenStack is working with the Ubernetes team led by Quinton to explore
how OpenStack federation could help K8s federation work better. This SIG
seems like a good fit for our ongoing work.

Will you guys have a session regarding this at the summit? How could we
approach you?

On Sat, Apr 23, 2016 at 8:19 AM, Ihor Dvoretskyi 
wrote:

> Colleagues, I'm happy to announce to the OpenStack community the
> Kubernetes OpenStack Special Interest Group.
>
> The Kubernetes community is currently working toward deeper integration
> between OpenStack and Kubernetes. One of the main aims now is to enable
> OpenStack as a platform for running Kubernetes clusters, and Kubernetes as
> the underlying layer for running OpenStack workloads.
>
> Steve Gordon and I have prepared a blog post which briefly describes our
> activities within the community [1].
>
> If you have any questions or suggestions regarding the Kubernetes and
> OpenStack-related activities, don't hesitate to join us [2]. And of
> course, you may reach us at the OpenStack Summit '16 in Austin!
>
> [1]
> http://blog.kubernetes.io/2016/04/introducing-kubernetes-openstack-sig.html
> [2] https://github.com/kubernetes/kubernetes/wiki/SIG-Openstack
>
> --
> Best regards,
>
> Ihor Dvoretskyi,
> OpenStack Operations Engineer
>
> ---
>
> Mirantis, Inc. (925) 808-FUEL
>
>
>


-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado


[openstack-dev] [kubernetes] Introducing the Kubernetes OpenStack Special Interest Group

2016-04-22 Thread Ihor Dvoretskyi
Colleagues, I'm happy to announce to the OpenStack community the
Kubernetes OpenStack Special Interest Group.

The Kubernetes community is currently working toward deeper integration
between OpenStack and Kubernetes. One of the main aims now is to enable
OpenStack as a platform for running Kubernetes clusters, and Kubernetes as
the underlying layer for running OpenStack workloads.

Steve Gordon and I have prepared a blog post which briefly describes our
activities within the community [1].

If you have any questions or suggestions regarding the Kubernetes and
OpenStack-related activities, don't hesitate to join us [2]. And of
course, you may reach us at the OpenStack Summit '16 in Austin!

[1]
http://blog.kubernetes.io/2016/04/introducing-kubernetes-openstack-sig.html
[2] https://github.com/kubernetes/kubernetes/wiki/SIG-Openstack

-- 
Best regards,

Ihor Dvoretskyi,
OpenStack Operations Engineer

---

Mirantis, Inc. (925) 808-FUEL