On Fri, Sep 8, 2017 at 1:04 PM Amador Pahim <[email protected]> wrote:
> On Fri, Sep 8, 2017 at 12:14 PM, Lucas Meneghel Rodrigues
> <[email protected]> wrote:
> > Hi Guannan,
> >
> > Since 52.0 is the new LTS release, and since we usually only backport
> > bugfixes, not features, I'd ask whether porting your stuff to 52.0 is
> > an option.
> >
> > Because the request, as is, is something that I [1] would like to avoid.
> > Please let me know your current needs and let's see how we can
> > accommodate them.
> >
> > [1] Of course, we need to hear the opinion of the other maintainers.
> > Guys, let me know your thoughts.
>
> Yes, I was thinking about that. All your assumptions are correct: we
> don't backport features, 52 is the 'current' LTS, and so on. On the
> other hand, v36 is still supported (until Dec 27, 2017, at least) and
> it's the only LTS that supports RHEL6, so upgrading to v52 is not an
> option for him.
>
> Even if we consider it a "support exception" and backport that feature,
> it will only solve a very specific problem with a workflow that cannot
> be replicated after the v36 EOL. And RHEL6 is not even close to its
> EOL. According to
> https://access.redhat.com/support/policy/updates/errata, RHEL6 can be
> supported in some form up to 2024(!).
>
> So, Guannan, instead of claiming a "support exception" for this
> feature, I'd recommend you create and maintain an internal lib with all
> the features needed by your tests, even if that's just to backport
> upstream features like this one for your never-ending RHEL6 testing
> environment. How does that sound?

That's a great idea. Call it .backport and put your stuff there in case
you can't port your tests to 52.0. (A rough sketch of what that could
look like is appended below the quoted thread.)

> > Cheers.
> >
> > On Fri, Sep 8, 2017 at 8:42 AM Guannan Sun <[email protected]> wrote:
> >>
> >> Hi,
> >>
> >> Since RHEL6 still needs to use 36lts, and our test cases were updated
> >> to use the function from PR 1376:
> >>
> >> https://github.com/avocado-framework/avocado/pull/1376
> >>
> >> could you help backport those commits to 36lts?
> >>
> >> Thanks!
> >> Guannan
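For reference, a minimal sketch of what such an internal "backport"
package could look like. The package name follows the suggestion above,
and the wait_for() helper (borrowed from avocado.utils.wait) is purely a
stand-in for whatever function PR 1376 actually adds; that function
would be copied here instead and kept in sync manually for as long as
the 36lts/RHEL6 environment exists.

# backport/__init__.py -- hypothetical internal helper package.
# The upstream import path and the wait_for() signature below are
# illustrative only; the real feature from PR 1376 would go here.
try:
    # If the installed Avocado already ships the helper (e.g. on 52.0
    # LTS), simply re-export the upstream implementation.
    from avocado.utils.wait import wait_for  # noqa: F401
except ImportError:
    # On 36lts / RHEL6 hosts, fall back to a locally maintained copy.
    import time

    def wait_for(func, timeout, first=0.0, step=1.0):
        """Poll func() until it returns a truthy value or timeout expires."""
        time.sleep(first)
        deadline = time.time() + timeout
        while time.time() < deadline:
            result = func()
            if result:
                return result
            time.sleep(step)
        return None

Tests would then import the helper from the internal package ("from
backport import wait_for") rather than from Avocado itself, so the same
test code could run against both the 36lts and the 52.0 installations.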
