[jira] [Created] (MAPREDUCE-6754) Container Ids for a yarn application should be monotonically increasing in the scope of the application

2016-08-12 Thread Srikanth Sampath (JIRA)
Srikanth Sampath created MAPREDUCE-6754:
---

 Summary: Container Ids for a yarn application should be 
monotonically increasing in the scope of the application
 Key: MAPREDUCE-6754
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6754
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
Reporter: Srikanth Sampath
Assignee: Srikanth Sampath


Currently, container Ids are reused across application attempts.  The latest 
container id is stored in AppSchedulingInfo and is reinitialized with every 
application attempt, so the containerId's scope is limited to the application 
attempt.

In the MR framework, it is important to note that the containerId is used as 
part of the JvmId, which has three components.  
The JvmId is used in data structures in TaskAttemptListener and is passed 
between the AppMaster and the individual tasks.  Within an application attempt, 
no two tasks have the same JvmId.

This works well today, since in-flight tasks are killed if the AppMaster 
goes down.  However, if we want to make the AM work-preserving, containers 
(and hence containerIds) live across application attempts.  If we recycle 
containerIds across attempts, then two independent tasks (one in flight from a 
previous attempt and a new one in a succeeding attempt) can end up with the 
same JvmId and cause havoc.
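
To make the collision concrete, here is a minimal sketch in plain Java. The 
JvmId stand-in below is an assumption for illustration only (modeled on the 
usual <jobId, isMap, id> triple), not the real org.apache.hadoop.mapred.JVMId 
class:

public class JvmIdCollisionSketch {

    // Hypothetical stand-in for the JvmId triple, rendered as the string key
    // an AM-side map (a la TaskAttemptListener) might index launched tasks by.
    static String jvmKey(String jobId, boolean isMap, long containerId) {
        return jobId + "/" + (isMap ? "m" : "r") + "/" + containerId;
    }

    public static void main(String[] args) {
        // Attempt 1: counter starts at 0; the map task in container 1 is still
        // in flight when the AppMaster dies.
        String inflightFromAttempt1 = jvmKey("job_0001", true, 1);

        // Attempt 2: the counter is reinitialized to 0 and hands out id 1 again.
        String freshInAttempt2 = jvmKey("job_0001", true, 1);

        // Two independent tasks now share one key, so the AM's bookkeeping can
        // no longer tell them apart.
        System.out.println(inflightFromAttempt1.equals(freshInAttempt2)); // true
    }
}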

This can be solved in two ways:

*Approach A*: Include the attempt id as part of the JvmId. This is a viable 
solution; however, it changes the format of the JvmId, and changing something 
that has existed for so long for the sake of an optional feature is not 
persuasive.

*Approach B*: Make the container id monotonically increasing for the life of 
an application, so that container ids are not reused across application 
attempts and containers can outlive an application attempt. This is the 
preferred approach, as it is clean, logical and backwards compatible; nothing 
changes for existing applications or the internal workings.  
*How this is achieved:*
Currently, the latest containerId is maintained only per application attempt 
and is reinitialized for each new attempt.  With this approach, we retrieve the 
latest containerId from the just-failed attempt and initialize the new attempt 
with it (instead of 0).   I can provide the patch if it helps; it currently 
exists in MAPREDUCE-6726.
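
A minimal sketch of the Approach B bookkeeping, with assumed names (the real 
change belongs in AppSchedulingInfo, and the actual patch is the one on 
MAPREDUCE-6726):

import java.util.concurrent.atomic.AtomicLong;

public class MonotonicContainerIds {

    // Per-attempt counter, analogous to the latest containerId kept in
    // AppSchedulingInfo.
    private final AtomicLong latestContainerId;

    // The first attempt starts at 0; a recovered attempt is seeded with the
    // latest containerId retrieved from the just-failed attempt.
    MonotonicContainerIds(long seedFromPreviousAttempt) {
        this.latestContainerId = new AtomicLong(seedFromPreviousAttempt);
    }

    long allocateContainerId() {
        return latestContainerId.incrementAndGet();
    }

    public static void main(String[] args) {
        MonotonicContainerIds attempt1 = new MonotonicContainerIds(0);
        long c1 = attempt1.allocateContainerId(); // 1
        long c2 = attempt1.allocateContainerId(); // 2

        // The AM fails over; the new attempt continues from 2 rather than 0,
        // so container ids stay unique for the life of the application.
        MonotonicContainerIds attempt2 = new MonotonicContainerIds(c2);
        System.out.println(c1 + " " + c2 + " " + attempt2.allocateContainerId());
    }
}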





Re: [Release thread] 2.6.5 release activities

2016-08-12 Thread Chris Trezzo
Thanks everyone for the discussion! Based on what has been said in this
thread, I will move forward on the preparation efforts (working with
Sangjin and the community) for a 2.6.5 release. If there is a strong
objection to this, please let us know.

I see three initial tasks going forward:

1. Fix the pre-commit build for branch-2.6. I am not entirely sure what is
involved here, but will start looking into it using HADOOP-12800 as a
starting point.

2. Look through the unresolved issues still targeted to 2.6.5 and
resolve/re-target as necessary. There are currently 17 of them (the JQL behind
this link is spelled out after this list):
https://issues.apache.org/jira/issues/?jql=(project%20%
3D%20%22Hadoop%20Common%22%20OR%20project%20%3D%20%22Hadoop%20YARN%22%20OR%
20project%20%3D%20%22Hadoop%20Map%2FReduce%22%20OR%
20project%20%3D%20%22Hadoop%20HDFS%22)%20AND%20(
fixVersion%20%3D%202.6.5%20OR%20%22Target%20Version%2Fs%22%
20%3D%202.6.5)%20AND%20(status%20!%3D%20Resolved%20AND%20status%20!%3D%
20Closed)%20ORDER%20BY%20status%20ASC

3. Look through the backport candidates in the spreadsheet in more detail
and backport/drop as necessary. There are currently 15 of them:
https://docs.google.com/spreadsheets/d/1lfG2CYQ7W4q3olWpOCo6EBAey1WYC
8hTRUemHvYPPzY/edit?usp=sharing
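
For reference, the JQL behind the link in task 2, URL-decoded, reads roughly:

(project = "Hadoop Common" OR project = "Hadoop YARN" OR
 project = "Hadoop Map/Reduce" OR project = "Hadoop HDFS")
AND (fixVersion = 2.6.5 OR "Target Version/s" = 2.6.5)
AND (status != Resolved AND status != Closed)
ORDER BY status ASC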

Finally, I think it would be awesome to continue the release end-of-life
discussion. As we get better and better at release cadence, it is going to
be really helpful to have an established EOL policy. It will be less
confusing for contributors and help set clear expectations for end users as
to when they need to start making reasonable migration plans, especially if
they want to stay on a well-supported release line. I would be happy to
help with this effort as well. It would be great if we could leverage that
discussion and have EOL information for the 2.6 line when we release 2.6.5.

As always, please let me know if I have missed something.

Thanks!
Chris Trezzo

On Thu, Aug 11, 2016 at 1:03 PM, Karthik Kambatla 
wrote:

> Since there is sufficient interest in 2.6.5, we should probably do it. All
> the reasons Allen outlines make sense.
>
> That said, Junping brings up a very important point that we should think of
> for future releases. For a new user or a user that does not directly
> contribute to the project, more stable releases make it hard to pick from.
>
> As Chris T mentioned, the notion of EOL for our releases seems like a good
> idea. However, to come up with any EOLs, we need to have some sort of
> cadence for the releases. While this is hard for major releases (big bang,
> potentially incompatible features), it should be doable for minor releases.
>
> How do people feel about doing a minor release every 6 months, with
> follow-up maintenance releases every 2 months until the next minor and as
> needed after that? That way, we could EOL a minor release a year after its
> initial release? In the future, we could consider shrinking this window. In
> addition to the EOL, this also makes our releases a little more predictable
> for both users and vendors. Note that 2.6.0 went out almost 2 years ago and
> we haven't had a new minor release in 14 months. I am happy to start
> another DISCUSS thread around this if people think it is useful.
>
> Thanks
> Karthik
>
> On Thu, Aug 11, 2016 at 12:50 PM, Allen Wittenauer <
> a...@effectivemachines.com
> > wrote:
>
> >
> > > On Aug 11, 2016, at 8:10 AM, Junping Du  wrote:
> > >
> > > Allen, to be clear, I am not against any branch release effort here.
> > However,
> >
> > "I'm not an X but "
> >
> > > as RM for previous releases 2.6.3 and 2.6.4, I feel to have
> > responsibility to take care branch-2.6 together with other RMs (Vinod and
> > Sangjin) on this branch and understand current gap - especially, to get
> > consensus from community on the future plan for 2.6.x.
> > > Our bylaw give us freedom for anyone to do release effort, but our
> bylaw
> > doesn't stop our rights for reasonable question/concern on any release
> > plan. As you mentioned below, people can potentially fire up branch-1
> > release effort. But if you call a release plan tomorrow for branch-1, I
> > cannot imagine nobody will question on that effort. Isn't it?
> >
> > From previous discussions I've seen around releases, I
> > think it would depend upon which employee from which vendor raised the
> > question.
> >
> > > Let's keep discussions on releasing 2.6.5 more technical. IMO, to make
> > 2.6.5 release more reasonable, shouldn't we check following questions
> first?
> > > 1. Do we have any significant issues that should land on 2.6.5
> comparing
> > with 2.6.4?
> > > 2. If so, any technical reasons (like: upgrade is not smoothly,
> > performance downgrade, incompatibility with downstream projects, etc.) to
> > stop our users to move from 2.6.4 to 2.7.2/2.7.3?
> > > I believe having good answer on these questions can make our release
> > plan more reasonable to the whole community. More thoughts?
> 

[DISCUSS] Release cadence and EOL

2016-08-12 Thread Karthik Kambatla
Forking this discussion off from the 2.6.5 release thread. Junping and Chris T
have brought up important concerns regarding too many concurrent releases
and the lack of an EOL for our releases.

First up, it would be nice to hear from others whether having a notion of
EOL, and more predictability in general, for our releases is indeed of
interest.

Secondly, I believe EOLs work better in conjunction with a predictable
cadence. Given past discussions on this and the current periods between our
minor releases, I would like to propose a minor release on the latest major
line every 6 months and a maintenance release on the latest minor release
every 2 months.

Eager to hear others' thoughts.

Thanks
Karthik


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2016-08-12 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/131/

[Aug 12, 2016 7:05:52 AM] (kai.zheng) HADOOP-11588. Benchmark framework and 
test for erasure coders.
[Aug 11, 2016 6:57:43 PM] (weichiu) HADOOP-13441. Document LdapGroupsMapping 
keystore password properties.
[Aug 11, 2016 7:27:09 PM] (weichiu) HADOOP-13190. Mention 
LoadBalancingKMSClientProvider in KMS HA
[Aug 11, 2016 7:39:41 PM] (naganarasimha_gr) YARN-4833. For Queue 
AccessControlException client retries multiple
[Aug 12, 2016 2:56:58 AM] (sjlee) HADOOP-13410. RunJar adds the content of the 
jar twice to the classpath
[Aug 13, 2016 5:52:37 AM] (kai.zheng) HDFS-8668. Erasure Coding: revisit buffer 
used for encoding and




-1 overall


The following subsystems voted -1:
asflicense unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.yarn.logaggregation.TestAggregatedLogFormat 
   hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.client.api.impl.TestYarnClient 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/131/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/131/artifact/out/diff-compile-javac-root.txt
  [172K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/131/artifact/out/diff-checkstyle-root.txt
  [16M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/131/artifact/out/diff-patch-pylint.txt
  [16K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/131/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/131/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/131/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/131/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/131/artifact/out/diff-javadoc-javadoc-root.txt
  [2.2M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/131/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
  [24K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/131/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/131/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [56K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/131/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [268K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/131/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/131/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt
  [124K]

   asflicense:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/131/artifact/out/patch-asflicense-problems.txt
  [4.0K]

Powered by Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org




Re: [Release thread] 2.6.5 release activities

2016-08-12 Thread Junping Du
I am OK with going forward with the 2.6.5 release, given that some people may 
not be ready to migrate to 2.7 and we have already done some preparation work. 
Thanks Chris for pushing the release work forward!
As for future releases of 2.6 (after 2.6.5) or any other legacy releases, I 
think they warrant more discussion. I disagree with what Allen said below, that 
Java 6 support should be a solid reason to keep branch-2.6 releases going. In 
this community we were aggressive enough to drop Java 7 support in the 3.0.x 
releases; why, then, are we so conservative about continuing to release new 
bits to support Java 6? Our release efforts, although driven by different 
people, should be logically consistent. 
I don't think we have had clear rules/policies about the EOL of legacy releases 
before. This is a bit off topic here; I will start a new thread later for more 
discussion.

Thanks,

Junping


From: Chris Trezzo 
Sent: Friday, August 12, 2016 12:42 AM
To: Karthik Kambatla
Cc: common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org; 
mapreduce-dev@hadoop.apache.org; yarn-...@hadoop.apache.org
Subject: Re: [Release thread] 2.6.5 release activities

Thanks everyone for the discussion! Based on what has been said in this
thread, I will move forward on the preparation efforts (working with
Sangjin and the community) for a 2.6.5 release. If there is a strong
objection to this, please let us know.

I see three initial tasks going forward:

1. Fix the pre-commit build for branch-2.6. I am not entirely sure what is
involved here, but will start looking into it using HADOOP-12800 as a
starting point.

2. Look through the unresolved issues still targeted to 2.6.5 and
resolve/re-target as necessary. There are currently 17 of them:
https://issues.apache.org/jira/issues/?jql=(project%20%
3D%20%22Hadoop%20Common%22%20OR%20project%20%3D%20%22Hadoop%20YARN%22%20OR%
20project%20%3D%20%22Hadoop%20Map%2FReduce%22%20OR%
20project%20%3D%20%22Hadoop%20HDFS%22)%20AND%20(
fixVersion%20%3D%202.6.5%20OR%20%22Target%20Version%2Fs%22%
20%3D%202.6.5)%20AND%20(status%20!%3D%20Resolved%20AND%20status%20!%3D%
20Closed)%20ORDER%20BY%20status%20ASC

3. Look through the backport candidates in the spreadsheet in more detail
and backport/drop as necessary. There are currently 15 of them:
https://docs.google.com/spreadsheets/d/1lfG2CYQ7W4q3olWpOCo6EBAey1WYC
8hTRUemHvYPPzY/edit?usp=sharing

Finally, I think it would be awesome to continue the release end-of-life
discussion. As we get better and better at release cadence it is going to
be really helpful to have an established EOL policy. It will be less
confusing for contributors and help set clear expectations for end users as
to when they need to start making reasonable migration plans, especially if
they want to stay on a well supported release line. I would be happy to
help with this effort as well. It would be great if we could leverage that
discussion and have EOL information for the 2.6 line when we release 2.6.5.

As always, please let me know if I have missed something.

Thanks!
Chris Trezzo

On Thu, Aug 11, 2016 at 1:03 PM, Karthik Kambatla 
wrote:

> Since there is sufficient interest in 2.6.5, we should probably do it. All
> the reasons Allen outlines make sense.
>
> That said, Junping brings up a very important point that we should think of
> for future releases. For a new user or a user that does not directly
> contribute to the project, more stable releases make it hard to pick from.
>
> As Chris T mentioned, the notion of EOL for our releases seems like a good
> idea. However, to come up with any EOLs, we need to have some sort of
> cadence for the releases. While this is hard for major releases (big bang,
> potentially incompatible features), it should be doable for minor releases.
>
> How do people feel about doing a minor release every 6 months, with
> follow-up maintenance releases every 2 months until the next minor and as
> needed after that? That way, we could EOL a minor release a year after its
> initial release? In the future, we could consider shrinking this window. In
> addition to the EOL, this also makes our releases a little more predictable
> for both users and vendors. Note that 2.6.0 went out almost 2 years ago and
> we haven't had a new minor release in 14 months. I am happy to start
> another DISCUSS thread around this if people think it is useful.
>
> Thanks
> Karthik
>
> On Thu, Aug 11, 2016 at 12:50 PM, Allen Wittenauer <
> a...@effectivemachines.com
> > wrote:
>
> >
> > > On Aug 11, 2016, at 8:10 AM, Junping Du  wrote:
> > >
> > > Allen, to be clear, I am not against any branch release effort here.
> > However,
> >
> > "I'm not an X but "
> >
> > > as RM for previous releases 2.6.3 and 2.6.4, I feel to have
> > responsibility to take care branch-2.6 together with other RMs (Vinod and
> > Sangjin) on this branch and understand current gap - especially, to get
> > consensus from 

[VOTE] Release Apache Hadoop 2.7.3 RC1

2016-08-12 Thread Vinod Kumar Vavilapalli
Hi all,

I've created a release candidate RC1 for Apache Hadoop 2.7.3.

As discussed before, this is the next maintenance release to follow up 2.7.2.

The RC is available for validation at: 
http://home.apache.org/~vinodkv/hadoop-2.7.3-RC1/ 


The RC tag in git is: release-2.7.3-RC1

The maven artifacts are available via repository.apache.org at 
https://repository.apache.org/content/repositories/orgapachehadoop-1045/


The release-notes are inside the tar-balls at location 
hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html. I hosted 
this at home.apache.org/~vinodkv/hadoop-2.7.3-RC1/releasenotes.html 
 for your 
quick perusal.

As you may have noted,
 - a few issues with RC0 forced an RC1 [1]
 - a very long fix-cycle for the License & Notice issues (HADOOP-12893) caused 
2.7.3 (along with every other Hadoop release) to slip by quite a bit. This 
release's related discussion thread is linked below: [2].

Please try the release and vote; the vote will run for the usual 5 days.

Thanks,
Vinod

[1] [VOTE] Release Apache Hadoop 2.7.3 RC0: 
https://www.mail-archive.com/hdfs-dev%40hadoop.apache.org/index.html#26106 

[2]: 2.7.3 release plan: 
https://www.mail-archive.com/hdfs-dev%40hadoop.apache.org/msg24439.html 


Re: [Release thread] 2.6.5 release activities

2016-08-12 Thread Kihwal Lee
You might want the snapshot bug fix done in HDFS-7056. That bug creates 
snapshot filediffs even if you never use snapshots. For 2.6, we will have to do 
it in a separate jira to pick up only that fix. Related to this, HDFS-9696 might 
be of interest too.
Kihwal

  From: Chris Trezzo 
 To: Karthik Kambatla  
Cc: "common-...@hadoop.apache.org" ; 
"hdfs-...@hadoop.apache.org" ; 
"mapreduce-dev@hadoop.apache.org" ; 
"yarn-...@hadoop.apache.org" 
 Sent: Thursday, August 11, 2016 6:42 PM
 Subject: Re: [Release thread] 2.6.5 release activities
   
Thanks everyone for the discussion! Based on what has been said in this
thread, I will move forward on the preparation efforts (working with
Sangjin and the community) for a 2.6.5 release. If there is a strong
objection to this, please let us know.

I see three initial tasks going forward:

1. Fix the pre-commit build for branch-2.6. I am not entirely sure what is
involved here, but will start looking into it using HADOOP-12800 as a
starting point.

2. Look through the unresolved issues still targeted to 2.6.5 and
resolve/re-target as necessary. There are currently 17 of them:
https://issues.apache.org/jira/issues/?jql=(project%20%
3D%20%22Hadoop%20Common%22%20OR%20project%20%3D%20%22Hadoop%20YARN%22%20OR%
20project%20%3D%20%22Hadoop%20Map%2FReduce%22%20OR%
20project%20%3D%20%22Hadoop%20HDFS%22)%20AND%20(
fixVersion%20%3D%202.6.5%20OR%20%22Target%20Version%2Fs%22%
20%3D%202.6.5)%20AND%20(status%20!%3D%20Resolved%20AND%20status%20!%3D%
20Closed)%20ORDER%20BY%20status%20ASC

3. Look through the backport candidates in the spreadsheet in more detail
and backport/drop as necessary. There are currently 15 of them:
https://docs.google.com/spreadsheets/d/1lfG2CYQ7W4q3olWpOCo6EBAey1WYC
8hTRUemHvYPPzY/edit?usp=sharing

Finally, I think it would be awesome to continue the release end-of-life
discussion. As we get better and better at release cadence it is going to
be really helpful to have an established EOL policy. It will be less
confusing for contributors and help set clear expectations for end users as
to when they need to start making reasonable migration plans, especially if
they want to stay on a well supported release line. I would be happy to
help with this effort as well. It would be great if we could leverage that
discussion and have EOL information for the 2.6 line when we release 2.6.5.

As always, please let me know if I have missed something.

Thanks!
Chris Trezzo

On Thu, Aug 11, 2016 at 1:03 PM, Karthik Kambatla 
wrote:

> Since there is sufficient interest in 2.6.5, we should probably do it. All
> the reasons Allen outlines make sense.
>
> That said, Junping brings up a very important point that we should think of
> for future releases. For a new user or a user that does not directly
> contribute to the project, more stable releases make it hard to pick from.
>
> As Chris T mentioned, the notion of EOL for our releases seems like a good
> idea. However, to come up with any EOLs, we need to have some sort of
> cadence for the releases. While this is hard for major releases (big bang,
> potentially incompatible features), it should be doable for minor releases.
>
> How do people feel about doing a minor release every 6 months, with
> follow-up maintenance releases every 2 months until the next minor and as
> needed after that? That way, we could EOL a minor release a year after its
> initial release? In the future, we could consider shrinking this window. In
> addition to the EOL, this also makes our releases a little more predictable
> for both users and vendors. Note that 2.6.0 went out almost 2 years ago and
> we haven't had a new minor release in 14 months. I am happy to start
> another DISCUSS thread around this if people think it is useful.
>
> Thanks
> Karthik
>
> On Thu, Aug 11, 2016 at 12:50 PM, Allen Wittenauer <
> a...@effectivemachines.com
> > wrote:
>
> >
> > > On Aug 11, 2016, at 8:10 AM, Junping Du  wrote:
> > >
> > > Allen, to be clear, I am not against any branch release effort here.
> > However,
> >
> >                "I'm not an X but "
> >
> > > as RM for previous releases 2.6.3 and 2.6.4, I feel to have
> > responsibility to take care branch-2.6 together with other RMs (Vinod and
> > Sangjin) on this branch and understand current gap - especially, to get
> > consensus from community on the future plan for 2.6.x.
> > > Our bylaw give us freedom for anyone to do release effort, but our
> bylaw
> > doesn't stop our rights for reasonable question/concern on any release
> > plan. As you mentioned below, people can potentially fire up branch-1
> > release effort. But if you call a release plan tomorrow for branch-1, I
> > cannot imagine nobody will question on that effort. Isn't it?
> >
> >                From previous discussions I've seen 

Re: [Release thread] 2.6.5 release activities

2016-08-12 Thread Sangjin Lee
Thanks folks for opening up the discussion on our EOL policy. That's also
exactly what I wanted to discuss when I opened up the 2.6.5 discussion:

> I also want to gauge the community's interest in maintaining the 2.6.x
> line. How long do we maintain this line? What would be a sensible EOL
> policy? Note that as the main code lines start diverging (java version,
> features, etc.), the cost of maintaining multiple release lines does go up.
> I'd love to hear your thoughts.


I look forward to that discussion thread. As for 2.6.5, since there is
enough interest in it, I'll work with Chris on that.

Regards,
Sangjin

On Fri, Aug 12, 2016 at 8:19 AM, Junping Du  wrote:

> I am OK with going forward for 2.6.5 release given some people may not be
> ready for migration to 2.7 and we have done some preparation work already.
> Thanks Chris for pushing the release work forward!
> About future releases of 2.6 (after 2.6.5) or any other legacy releases, I
> think it worth more discussions. I disagree with what Allen said below -
> Java 6 support should be a solid reason for branch-2.6 release keep going.
> In this community, we are so aggressive to drop Java 7 support in 3.0.x
> release. Here, why we are so conservative to keep releasing new bits to
> support Java 6? Our release effort, although driven by different person,
> should be consistent logically.
> I don't think we have clear rules/polices about EOL of legacy releases
> before. This is a bit off topic here. Will start a new thread later for
> more discussion.
>
> Thanks,
>
> Junping
>
> 
> From: Chris Trezzo 
> Sent: Friday, August 12, 2016 12:42 AM
> To: Karthik Kambatla
> Cc: common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org;
> mapreduce-dev@hadoop.apache.org; yarn-...@hadoop.apache.org
> Subject: Re: [Release thread] 2.6.5 release activities
>
> Thanks everyone for the discussion! Based on what has been said in this
> thread, I will move forward on the preparation efforts (working with
> Sangjin and the community) for a 2.6.5 release. If there is a strong
> objection to this, please let us know.
>
> I see three initial tasks going forward:
>
> 1. Fix the pre-commit build for branch-2.6. I am not entirely sure what is
> involved here, but will start looking into it using HADOOP-12800 as a
> starting point.
>
> 2. Look through the unresolved issues still targeted to 2.6.5 and
> resolve/re-target as necessary. There are currently 17 of them:
> https://issues.apache.org/jira/issues/?jql=(project%20%
> 3D%20%22Hadoop%20Common%22%20OR%20project%20%3D%20%
> 22Hadoop%20YARN%22%20OR%
> 20project%20%3D%20%22Hadoop%20Map%2FReduce%22%20OR%
> 20project%20%3D%20%22Hadoop%20HDFS%22)%20AND%20(
> fixVersion%20%3D%202.6.5%20OR%20%22Target%20Version%2Fs%22%
> 20%3D%202.6.5)%20AND%20(status%20!%3D%20Resolved%20AND%20status%20!%3D%
> 20Closed)%20ORDER%20BY%20status%20ASC
>
> 3. Look through the backport candidates in the spreadsheet in more detail
> and backport/drop as necessary. There are currently 15 of them:
> https://docs.google.com/spreadsheets/d/1lfG2CYQ7W4q3olWpOCo6EBAey1WYC
> 8hTRUemHvYPPzY/edit?usp=sharing
>
> Finally, I think it would be awesome to continue the release end-of-life
> discussion. As we get better and better at release cadence it is going to
> be really helpful to have an established EOL policy. It will be less
> confusing for contributors and help set clear expectations for end users as
> to when they need to start making reasonable migration plans, especially if
> they want to stay on a well supported release line. I would be happy to
> help with this effort as well. It would be great if we could leverage that
> discussion and have EOL information for the 2.6 line when we release 2.6.5.
>
> As always, please let me know if I have missed something.
>
> Thanks!
> Chris Trezzo
>
> On Thu, Aug 11, 2016 at 1:03 PM, Karthik Kambatla 
> wrote:
>
> > Since there is sufficient interest in 2.6.5, we should probably do it.
> All
> > the reasons Allen outlines make sense.
> >
> > That said, Junping brings up a very important point that we should think
> of
> > for future releases. For a new user or a user that does not directly
> > contribute to the project, more stable releases make it hard to pick
> from.
> >
> > As Chris T mentioned, the notion of EOL for our releases seems like a
> good
> > idea. However, to come up with any EOLs, we need to have some sort of
> > cadence for the releases. While this is hard for major releases (big
> bang,
> > potentially incompatible features), it should be doable for minor
> releases.
> >
> > How do people feel about doing a minor release every 6 months, with
> > follow-up maintenance releases every 2 months until the next minor and as
> > needed after that? That way, we could EOL a minor release a year after
> its
> > initial release? In the future, we could consider shrinking this window.
> In
> > addition to the