I see your point about 1.10 and the difficulty of upgrading to 2.x and
Hadoop 3. I would be in favor of doing a release of 1.10 and releasing
that as the first LTS to replace 1.9 if we limit the changes between
1.9 and 1.10 to the following:

1. Update Java minimum requirements to Java 8
2. Make Hadoop 3 the default in the build instead of Hadoop 2 (but
still support Hadoop 2 builds) [negotiable]

This assumes we would release 1.10.0 *very* soon, and that we don't
make any other changes (build/release tooling to facilitate a release
being the exception). This also assumes we would abandon a 1.9.4
release and release those bug fixes in 1.10.0 instead.

If we do this, I think we'd probably want to shorten the 3 year
support window a little bit for 1.10 (maybe 18 or 24 months, but
still EOL around 1 year after the next LTS), so we don't end up
simultaneously supporting too many overlapping LTS releases in the
future... but just for this first one.

The one roadblock that I see to the 1.10 LTS proposal would be that we
have already litigated the decision to move to Java 8 several times in
the past, and it was previously agreed that this change would occur in
2.x, and *not* in 1.x. If you wish to have 1.10 as the LTS with Java
8, I think you should do a [VOTE] thread in the dev list to ensure
community consensus on that, before moving forward here.

On Thu, Oct 31, 2019 at 12:48 PM Ed Coleman <[email protected]> wrote:
>
> Another dimension to this discussion that I'd like to address is the
> provision for a 1.10 version.  In fact, I lean towards nominating 1.10
> as the pre-2.x LTS version instead of 1.9.x.  I am in favor of
> the basic LTS proposal, but I think that additional accommodations to ease
> the pre-2.x to 2.x upgrade path must be considered before any adoption of
> an LTS plan.
>
> The largest change that I'd like to propose for 1.10 is that the minimum
> Java language version be bumped to Java 8 so that merging code between
> versions can use the same language constructs.  As it is now, code written
> for 1.9.x cannot use lambdas, streams... any of the "modern" features.
> Merging the code forward, one is left with the option of not using those
> features, or changing the code, which, if not done perfectly, could
> introduce a different set of bugs between versions.  Likewise, if someone
> wanted to back-port a feature from 2.x into the 1.9.x code base, additional
> changes, beyond those required by the 2.x restructuring, are likely to
> be necessary.
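[To illustrate the merge burden described above, here is a minimal sketch (hypothetical names, not from the Accumulo code base) of the same transformation written in Java 8 style, as 2.x code can be, versus the Java 7 style that 1.9.x is limited to:]

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class MergeExample {

    // Java 8 style (allowed in 2.x, but not in 1.9.x): streams and method references
    static List<String> upperJava8(List<String> names) {
        return names.stream()
                .map(String::toUpperCase)
                .collect(Collectors.toList());
    }

    // Java 7 style: what the same code must be rewritten to when back-ported to 1.9.x
    static List<String> upperJava7(List<String> names) {
        List<String> result = new ArrayList<>();
        for (String name : names) {
            result.add(name.toUpperCase());
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> names = Arrays.asList("tablet", "master", "gc");
        System.out.println(upperJava8(names)); // [TABLET, MASTER, GC]
        System.out.println(upperJava7(names)); // [TABLET, MASTER, GC]
    }
}
```

[Every such back-port is a manual rewrite like the one above, and each rewrite is a chance to introduce a divergence between the two branches.]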
>
> The migration from Accumulo 1.9.x to 2.x is not straightforward and will
> require changes to Accumulo clients.  However, the largest obstacle to
> upgrading to 2.x is the Hadoop 3 requirement.  This is a major,
> non-trivial requirement change that is going to take significant effort (and
> time) for large-scale deployments to develop against and then upgrade to
> Hadoop 3.  There is going to be significant work required to adequately
> test the necessary client changes, and then upgrade the deployed systems,
> first to Hadoop 3 and then to Accumulo 2.x.  And until they can, they are
> going to be on a pre-2.x Accumulo version.
>
> With code frozen at 1.9.x, large deployments are going to need to make some
> hard decisions - do they continue to use 1.9.x as released, or do they make
> some patched Frankenstein version?  If they find that they need to patch
> aggressively to get features that improve current operations, how much
> additional work is going to be required if / when they are in a position to
> upgrade?  How much of that work would further delay upgrading to Hadoop 3 /
> Accumulo 2.x?
>
> Having features released by the community eases support across the whole
> ecosystem.  We will all have access to the same code base, the code will be
> exercised by the continuous integration tests, and it provides greater
> assurance that those features will be available once an upgrade to 2.x is
> possible.  Otherwise, reasoning about what "version" is actually running and
> what that implies when requesting support from the community is just that
> much harder for everyone.
>
> My opinion is that if we can accommodate some feature improvements while
> groups work towards adopting a Hadoop 3 / 2.x deployment, then we can reduce
> the work required across the community and the users - work that freezing at
> 1.9.x would otherwise impose as an additional burden on the users.
>
> I am in favor of adopting an LTS, but I think we really need to consider
> the impact that requiring Hadoop 3 has on upgrading to Accumulo 2.x in
> the LTS plan.
