[openstack-dev] [magnum] Maintaining cluster API in upgrades

2015-09-14 Thread Ryan Rossiter
I have some food for thought with regard to upgrades, provoked by some 
incorrect usage of Magnum that led me to find [1].


Let's say we're running a cloud with Liberty Magnum, which works with 
Kubernetes API v1. During the Mitaka release, Kubernetes released v2, so 
now the Magnum conductor in Mitaka works with the Kubernetes v2 API. What 
would happen if I upgrade from L to M with Magnum? My existing Magnum/k8s 
clusters will be on v1, so having the Mitaka conductor attempt to interact 
with them will cause it to blow up, right? The k8s API calls will fail 
because the communicating components are using different versions of the 
API (assuming there are backwards incompatibilities).


I'm running through some suggestions in my head in order to handle this:

1. Have conductor maintain all supported older versions of k8s, and do 
API discovery to figure out which version of the API to use (a rough 
sketch of the discovery step follows the list)

  - This one sounds like a total headache from a code management standpoint

2. Do some sort of heat stack update to upgrade all existing clusters to 
use the current version of the API (also sketched after the list)
  - In my head, this would work kind of like a database migration, but 
it seems like it would be a lot harder


3. Maintain cluster clients outside of the Magnum tree
  - This would make maintaining the client compatibilities a lot easier
  - Would help eliminate the cruft of merging 48k lines for a 
swagger-generated client [2]

  - Having the client outside of tree would allow for a simple pip install
  - Not sure if this *actually* solves the problem above
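
To make option 1 a bit more concrete, here's a rough sketch (not Magnum 
code) of what the discovery step could look like. It assumes the bay 
object exposes the apiserver endpoint as api_address and that the 
apiserver answers GET /api with the versions it serves; both are 
assumptions for illustration only:

    import requests

    SUPPORTED_VERSIONS = ('v2', 'v1')  # newest first, purely illustrative

    def pick_k8s_api_version(bay):
        # Ask the apiserver which API versions it serves.
        resp = requests.get('%s/api' % bay.api_address, timeout=10)
        resp.raise_for_status()
        server_versions = resp.json().get('versions', [])
        # Use the newest version both sides understand.
        for version in SUPPORTED_VERSIONS:
            if version in server_versions:
                return version
        raise RuntimeError('No mutually supported k8s API version '
                           '(server offers %s)' % server_versions)

For option 2, the "migration" would presumably boil down to a stack 
update per existing bay, something along the lines of this 
python-heatclient sketch (auth and where the new template comes from are 
hand-waved):

    from heatclient import client as heat_client

    def upgrade_bay_stack(heat_endpoint, token, stack_id, template, params):
        heat = heat_client.Client('1', endpoint=heat_endpoint, token=token)
        # Push the new-release template onto the existing stack.
        heat.stacks.update(stack_id, template=template, parameters=params)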

This isn't meant to be a "we need to change this" topic; it's meant to be 
more of a "what if" discussion. I'm also open to suggestions other than 
the three above.


[1] 
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074448.html

[2] https://review.openstack.org/#/c/217427/

--
Thanks,

Ryan Rossiter (rlrossit)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Maintaining cluster API in upgrades

2015-09-14 Thread Hongbin Lu
Hi Ryan,

I think pushing python-k8sclient out of the Magnum tree (option 3) is the 
decision that was made at the Vancouver Summit (if I remember correctly). It 
definitely helps with solving the k8s versioning problems.
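
One way the out-of-tree client could help with the versioning problem 
(just a sketch on my side; the package/module layout below is made up): 
the conductor could install several client releases side by side and 
import the one matching the bay's recorded API version:

    import importlib

    def load_k8s_client(api_version):
        # Hypothetical out-of-tree 'k8sclient' package that ships one
        # sub-package per supported k8s API version.
        module = importlib.import_module('k8sclient.%s.api' % api_version)
        return module.ApiClient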

Best regards,
Hongbin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev