Hi all,

I’d like to open a discussion around how we support upstream Kubernetes 
releases.

Up until now, there has been a somewhat unofficial policy of supporting 3 
releases at any one time (e.g. 1.19, 1.20, 1.21). This, in fact, was the 
documented set of releases supported by the most recent YuniKorn 0.12.1.

We’ve also cut a new 0.12.2 release (IPMC vote pending) with the express 
purpose of supporting Kubernetes 1.22 and 1.23 as well, since these versions 
are beginning to be picked up by cloud providers. As of this release, for the 
first time, we officially support 5 releases (1.19, 1.20, 1.21, 1.22, 1.23). 

Currently, we have e2e tests running in master that test against all 5 versions.

The question is now, how do we move forward? Some items to discuss:

- Cloud vendors are adopting newer versions increasingly quickly, likely driven 
by Kubernetes’ rapid deprecation of old versions (in fact, 1.19 is already 
end-of-life as of December 2021). Should we immediately drop support for EOL 
releases, even though these may still be in use (sometimes widely) in the wild?

- There have been some concerns raised that supporting too many versions at once 
can be a burden. How and when should we drop support for older releases?

- How and when should we attempt to support new releases?

In the interests of kicking off this discussion, I’ll share a few of my 
thoughts.

There are two aspects to compatibility: source compatibility and binary 
compatibility. We currently build against Kubernetes 1.20, so that is where we 
have source-level compatibility. However, at runtime, we can currently support 
1.19-1.23 (and likely future versions) without code changes.

Historically, it has been challenging to support new K8s releases due to our 
internal use of several v1alpha / v1beta APIs. With the release of 0.12.2 (and 
master) this is no longer the case: we now rely almost exclusively on stable 
(v1) APIs, meaning our chances of “breaking” on a new release are lower than 
they have ever been. Supporting a new release is often just a matter of 
updating our e2e test runs to include the new K8s version, and updating the 
documentation on the release / helm charts to note that those versions have 
been tested.

When should we add support? In my opinion, as soon as a Kubernetes release is 
stable, we should immediately start testing against it (this means a version 
bump every 6 months or so). This will help us catch bugs quickly.

When should we drop support? 1.19 is already EOL, but is still widely in use. 
However, by the time we ship YuniKorn 1.0, it may not be. I think we could 
update our release documentation to reflect the versions that YuniKorn has been 
*tested* against (as opposed to officially supported). In other words, it might 
(and probably will) still work with 1.19, but we don’t officially support it 
and it hasn’t been tested. This would also keep our e2e test coverage a little 
smaller (in this case, 4 releases).

To summarize, I’m effectively proposing that we test against all non-EOL 
Kubernetes releases that are current as of a particular YuniKorn release, and 
have master track the latest available K8s releases as soon as possible. For 
all intents and purposes, this means supporting roughly 4 releases at once.
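To make that window concrete, here is a tiny sketch of the arithmetic (a 
hypothetical helper, not actual YuniKorn code): with a newest stable minor of 
1.23 and a 4-release window, the tested set would be 1.20 through 1.23.

```go
package main

import "fmt"

// supportedVersions is a hypothetical helper illustrating the proposed
// policy: given the newest stable Kubernetes minor and a window size,
// return the set of minor versions we would test against.
func supportedVersions(latestMinor, window int) []string {
	versions := make([]string, 0, window)
	for m := latestMinor - window + 1; m <= latestMinor; m++ {
		versions = append(versions, fmt.Sprintf("1.%d", m))
	}
	return versions
}

func main() {
	// With 1.23 as the newest stable minor and a 4-release window:
	fmt.Println(supportedVersions(23, 4)) // prints [1.20 1.21 1.22 1.23]
}
```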

There is one other concern: cases where, for technical reasons, we cannot 
support as many releases. I think we have to treat these case-by-case. For 
example, the recent releases had to drop compatibility with 1.18 and older 
because several v1beta APIs were removed in 1.22, while the corresponding v1 
APIs did not exist until 1.19. This made a release that supported both 1.18 and 
1.22 impossible. This may happen again in the future, but as noted above, we no 
longer depend on nearly as many unstable APIs, so the chance of this is low.


I would love to hear what others think about this, and am looking forward to a 
healthy discussion.

Thanks,

Craig




