Re: [Release thread] 2.6.5 release activities

2016-09-14 Thread Sangjin Lee
We ported 16 issues to branch-2.6. We will go ahead and start the release
process, including cutting the release branch. If you have any critical
change that should be made part of 2.6.5, please reach out to us and commit
the changes. Thanks!

Sangjin

On Mon, Sep 12, 2016 at 3:24 PM, Sangjin Lee  wrote:

> Thanks Chris!
>
> I'll help Chris to get those JIRAs marked in his spreadsheet committed.
> We'll cut the release branch shortly after that. If you have any critical
> change that should be made part of 2.6.5 (CVE patches included), please
> reach out to us and commit the changes. If all things go well, we'd like to
> cut the branch in a few days.
>
> Thanks,
> Sangjin
>
> On Fri, Sep 9, 2016 at 1:24 PM, Chris Trezzo  wrote:
>
>> Hi all,
>>
>> I wanted to give an update on the Hadoop 2.6.5 release efforts.
>>
>> Here is what has been done so far:
>>
>> 1. I have gone through all of the potential backports and recorded the
>> commit hash for each of them from the branch that seemed most
>> appropriate (e.g., if there was a backport to 2.7.x, I used the hash
>> from that backport).
>>
>> 2. I verified whether the cherry-pick for each commit is clean. This was
>> best effort, as some of the patches are in parts of the code that I am
>> less familiar with. This is recorded in the public spreadsheet here:
>> https://docs.google.com/spreadsheets/d/1lfG2CYQ7W4q3olWpOCo6EBAey1WYC8hTRUemHvYPPzY/edit?usp=sharing
>>
>> I am going to need help from committers to get these backports committed.
>> If there are any committers that have some spare cycles, especially if you
>> were involved with the initial commit for one of these issues, please look
>> at the spreadsheet and volunteer to backport one of the issues.
>>
>> As always, please let me know if you have any questions or feel that I
>> have
>> missed something.
>>
>> Thank you!
>> Chris Trezzo
>>
>> On Mon, Aug 15, 2016 at 10:55 AM, Allen Wittenauer
>> <a...@effectivemachines.com> wrote:
>>
>> >
>> > > On Aug 12, 2016, at 8:19 AM, Junping Du  wrote:
>> > >
>> > > In this community, we are so aggressive about dropping Java 7 support
>> > > in the 3.0.x release. Why, then, are we so conservative about releasing
>> > > new bits that support Java 6?
>> >
>> > I don't view a group of people putting bug fixes into a micro
>> > release as particularly conservative.  If a group within the community
>> > wasn't interested in doing it, 2.6.5 wouldn't be happening.
>> >
>> > But let's put the releases into context, because I think it tells
>> > a more interesting story.
>> >
>> > * hadoop 2.6.x = EOLed JREs (6,7)
>> > * hadoop 2.7 -> hadoop 2.x = transitional (7,8)
>> > * hadoop 3.x = JRE 8
>> > * hadoop 4.x = JRE 9
>> >
>> > There are groups of people still using JDK6 and they want bug
>> > fixes in a maintenance release.  Boom, there's 2.6.x.
>> >
>> > Hadoop 3.x has been pushed off for years for "reasons".  So we
>> > still have releases coming off of branch-2.  If 2.7 had been released
>> > as 3.x, this chart would look less weird. But it wasn't, so 2.x has
>> > this weird wart in the middle that supports both JDK7 and JDK8.  Given
>> > the public policy and roadmaps of at least one major vendor at the time
>> > of this writing, we should expect to see JDK7 support for at least the
>> > next two years after 3.x appears. Bang, there's 2.x, where x is some
>> > large number.
>> >
>> > Then there is the future.  People using JRE 8 want to use newer
>> > dependencies.  A reasonable request. Some of these dependency updates
>> > won't work with JRE 7.  We can't do that in hadoop 2.x in any sort of
>> > compatible way without breaking the universe. (Tons of JIRAs on this
>> > point.) This means we can only do it in 3.x (re: Hadoop Compatibility
>> > Guidelines).  Kapow, there's 3.x.
>> >
>> > The log4j community has stated that v1 won't work with JDK9. In
>> > turn, this means we'll need to upgrade to v2 at some point.  Upgrading
>> > to v2 will break the log4j properties file (and maybe other things?).
>> > Another incompatible change, and it likely won't appear until Apache
>> > Hadoop v4 unless someone takes the initiative to fix it before v3 hits
>> > store shelves.  This makes JDK9 the likely target for Apache Hadoop v4.
>> >
>> > Having major release cadences tied to JRE updates isn't
>> > necessarily a bad thing: it a) forces the community to actually stop
>> > beating around the bush on majors, and b) makes it relatively easy to
>> > determine, to some degree, what the schedule looks like.
>> >
>> >
>> >
>> >
>> >
>> > -
>> > To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
>> > For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>> >
>> >
>>
>
>


[jira] [Resolved] (YARN-5563) Add log messages for jobs in ACCEPTED state but not runnable.

2016-09-14 Thread Yufei Gu (JIRA)

 [ https://issues.apache.org/jira/browse/YARN-5563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yufei Gu resolved YARN-5563.

Resolution: Duplicate

> Add log messages for jobs in ACCEPTED state but not runnable.
> -
>
> Key: YARN-5563
> URL: https://issues.apache.org/jira/browse/YARN-5563
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>  Labels: supportability
>
> Leaf queues maintain a list of runnable and non-runnable apps. FairScheduler 
> marks an app non-runnable when it would exceed one of the following 
> properties of the leaf queue:
> (1) queue max apps, 
> (2) user max apps, 
> (3) queue maxResources, 
> (4) maxAMShare. 
> It would be nice to log the reason an app isn't runnable. The first three are 
> easy to infer, but the last one (maxAMShare) is particularly hard. We are 
> going to log all of them and show the reason, if any, in the WebUI 
> application view.
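For illustration, the decision the description sketches could look like the following. This is a minimal hypothetical sketch, not actual FairScheduler code: the class, method, and parameter names here are all invented for the example.

```java
// Hypothetical sketch of determining why an app is non-runnable, in the
// order the description lists the four limits. None of these names are
// real FairScheduler APIs.
public class NonRunnableReason {
  public static String reason(int queueApps, int queueMaxApps,
                              int userApps, int userMaxApps,
                              long queueUsage, long queueMaxResources,
                              double amShare, double maxAMShare) {
    if (queueApps >= queueMaxApps)       return "queue max apps exceeded";
    if (userApps >= userMaxApps)         return "user max apps exceeded";
    if (queueUsage >= queueMaxResources) return "queue maxResources exceeded";
    if (amShare > maxAMShare)            return "maxAMShare exceeded";
    return null; // app is runnable
  }

  public static void main(String[] args) {
    // Example: AM share 0.6 exceeds a maxAMShare of 0.5.
    System.out.println(reason(5, 10, 2, 5, 50, 100, 0.6, 0.5));
  }
}
```

A string returned from a check like this is exactly what could be logged and surfaced in the WebUI application view.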



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[jira] [Created] (YARN-5654) Not be able to run SLS with FairScheduler

2016-09-14 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-5654:


 Summary: Not be able to run SLS with FairScheduler
 Key: YARN-5654
 URL: https://issues.apache.org/jira/browse/YARN-5654
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Wangda Tan


With the config:
https://github.com/leftnoteasy/yarn_application_synthesizer/tree/master/configs/hadoop-conf-fs

And data:
https://github.com/leftnoteasy/yarn_application_synthesizer/tree/master/data/scheduler-load-test-data

Capacity Scheduler runs fine, but Fair Scheduler cannot be run successfully: it 
reports an NPE from RMAppAttemptImpl.






[jira] [Resolved] (YARN-5619) Provide way to limit MRJob's stdout/stderr size

2016-09-14 Thread Aleksandr Balitsky (JIRA)

 [ https://issues.apache.org/jira/browse/YARN-5619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aleksandr Balitsky resolved YARN-5619.
--
Resolution: Duplicate

> Provide way to limit MRJob's stdout/stderr size
> ---
>
> Key: YARN-5619
> URL: https://issues.apache.org/jira/browse/YARN-5619
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: log-aggregation, nodemanager
>Affects Versions: 2.7.0
>Reporter: Aleksandr Balitsky
>Priority: Minor
>
> We can run a job with a huge amount of stdout/stderr output, causing 
> undesired consequences. There is already a JIRA that has been open for a 
> while now:
> https://issues.apache.org/jira/browse/YARN-2231
> One possible solution is to redirect stdout and stderr to log4j in the 
> YarnChild.java main method via:
> System.setErr(new PrintStream(new LoggingOutputStream(<logger>, 
> Level.ERROR), true));
> System.setOut(new PrintStream(new LoggingOutputStream(<logger>, 
> Level.INFO), true));
> In this case System.out and System.err are redirected to a log4j logger 
> with an appropriate appender that directs output to the stderr or stdout 
> files with the needed size limitation. 
> Advantages of this solution:
> - It allows us to restrict file sizes during job execution.
> Disadvantages:
> - It works only for MR jobs.
> - Logs are stored in memory and flushed to disk only when the job 
> finishes (syslog works the same way), so we can lose logs if the container 
> is killed or fails.
> Is this an appropriate solution to the problem, or is there something 
> better?
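As a rough illustration of the redirect idea in the description, here is a minimal, self-contained line-buffering output stream. This is a hypothetical stand-in, not the log4j LoggingOutputStream the description refers to: the Consumer sink takes the place of a call like logger.log(level, line), and the sketch handles ASCII output only.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.io.PrintStream;
import java.util.function.Consumer;

// Hypothetical sketch: buffers bytes until a newline, then hands the
// complete line to a sink. A log4j-backed version would pass each line
// to logger.log(level, line), where a size-capped appender enforces the
// stdout/stderr limit.
public class LoggingOutputStream extends OutputStream {
  private final StringBuilder buf = new StringBuilder();
  private final Consumer<String> sink;

  public LoggingOutputStream(Consumer<String> sink) {
    this.sink = sink;
  }

  @Override
  public void write(int b) throws IOException {
    if (b == '\n') {           // complete line: emit and reset the buffer
      sink.accept(buf.toString());
      buf.setLength(0);
    } else if (b != '\r') {    // drop carriage returns, buffer the rest
      buf.append((char) b);    // ASCII-only for brevity in this sketch
    }
  }
}
```

Hooked up as in the description, this would look like: System.setOut(new PrintStream(new LoggingOutputStream(line -> log.info(line)), true)); with the autoflush flag set so each newline reaches the logger promptly.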






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2016-09-14 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/164/

[Sep 13, 2016 9:53:24 AM] (rohithsharmaks) YARN-5631. Missing 
refreshClusterMaxPriority usage in rmadmin help
[Sep 13, 2016 2:41:27 PM] (jlowe) YARN-5630. NM fails to start after downgrade 
from 2.8 to 2.7.
[Sep 13, 2016 4:38:12 PM] (aengineer) HDFS-10599. DiskBalancer: Execute CLI via 
Shell. Contributed by Manoj
[Sep 13, 2016 6:02:36 PM] (wang) HDFS-10837. Standardize serializiation of 
WebHDFS DirectoryListing.
[Sep 13, 2016 6:12:52 PM] (jing9) HADOOP-13546. Override equals and hashCode of 
the default retry policy
[Sep 13, 2016 7:42:10 PM] (aengineer) HDFS-10562. DiskBalancer: update 
documentation on how to report issues
[Sep 13, 2016 7:54:14 PM] (lei) HDFS-10636. Modify ReplicaInfo to remove the 
assumption that replica
[Sep 14, 2016 2:14:31 AM] (aajisaka) HADOOP-13598. Add eol=lf for unix format 
files in .gitattributes.
[Sep 15, 2016 2:46:00 AM] (kai.zheng) HADOOP-13218. Migrate other Hadoop side 
tests to prepare for removing




-1 overall


The following subsystems voted -1:
asflicense mvnsite unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.TestEncryptionZones 
   hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/164/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/164/artifact/out/diff-compile-javac-root.txt
  [168K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/164/artifact/out/diff-checkstyle-root.txt
  [16M]

   mvnsite:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/164/artifact/out/patch-mvnsite-root.txt
  [112K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/164/artifact/out/diff-patch-pylint.txt
  [16K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/164/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/164/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/164/artifact/out/whitespace-eol.txt
  [11M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/164/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/164/artifact/out/diff-javadoc-javadoc-root.txt
  [2.2M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/164/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [192K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/164/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/164/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [268K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/164/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt
  [124K]

   asflicense:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/164/artifact/out/patch-asflicense-problems.txt
  [4.0K]

Powered by Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org




[jira] [Created] (YARN-5653) testNonLabeledResourceRequestGetPreferrenceToNonLabeledNode fails intermittently

2016-09-14 Thread Jason Lowe (JIRA)
Jason Lowe created YARN-5653:


 Summary: 
testNonLabeledResourceRequestGetPreferrenceToNonLabeledNode fails intermittently
 Key: YARN-5653
 URL: https://issues.apache.org/jira/browse/YARN-5653
 Project: Hadoop YARN
  Issue Type: Bug
  Components: test
Reporter: Jason Lowe


Saw the following TestNodeLabelContainerAllocation failure in a recent 
precommit:
{noformat}
Running 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
Tests run: 19, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 113.791 sec 
<<< FAILURE! - in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
testNonLabeledResourceRequestGetPreferrenceToNonLabeledNode(org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation)
  Time elapsed: 0.266 sec  <<< FAILURE!
java.lang.AssertionError: expected:<0> but was:<1>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation.checkLaunchedContainerNumOnNode(TestNodeLabelContainerAllocation.java:562)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation.testNonLabeledResourceRequestGetPreferrenceToNonLabeledNode(TestNodeLabelContainerAllocation.java:842)
{noformat}







[jira] [Created] (YARN-5652) testRefreshNodesResourceWithResourceReturnInRegistration fails intermittently

2016-09-14 Thread Jason Lowe (JIRA)
Jason Lowe created YARN-5652:


 Summary: testRefreshNodesResourceWithResourceReturnInRegistration 
fails intermittently
 Key: YARN-5652
 URL: https://issues.apache.org/jira/browse/YARN-5652
 Project: Hadoop YARN
  Issue Type: Bug
  Components: test
Reporter: Jason Lowe


Saw the following in a recent precommit:
{noformat}
Running org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService
Tests run: 25, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 18.639 sec <<< 
FAILURE! - in org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService
testRefreshNodesResourceWithResourceReturnInRegistration(org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService)
  Time elapsed: 0.763 sec  <<< FAILURE!
org.junit.ComparisonFailure: expected:<> but 
was:<>
at org.junit.Assert.assertEquals(Assert.java:115)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService.testRefreshNodesResourceWithResourceReturnInRegistration(TestRMAdminService.java:286)
{noformat}







[jira] [Created] (YARN-5651) Changes to NMStateStore to persist reinitialization and rollback state

2016-09-14 Thread Arun Suresh (JIRA)
Arun Suresh created YARN-5651:
-

 Summary: Changes to NMStateStore to persist reinitialization and 
rollback state
 Key: YARN-5651
 URL: https://issues.apache.org/jira/browse/YARN-5651
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun Suresh
Assignee: Arun Suresh









[jira] [Created] (YARN-5650) Add CLI endpoints for updating application timeouts

2016-09-14 Thread Rohith Sharma K S (JIRA)
Rohith Sharma K S created YARN-5650:
---

 Summary: Add CLI endpoints for updating application timeouts
 Key: YARN-5650
 URL: https://issues.apache.org/jira/browse/YARN-5650
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Rohith Sharma K S
Assignee: Rohith Sharma K S









[jira] [Created] (YARN-5649) Add REST endpoints for updating application timeouts

2016-09-14 Thread Rohith Sharma K S (JIRA)
Rohith Sharma K S created YARN-5649:
---

 Summary: Add REST endpoints for updating application timeouts
 Key: YARN-5649
 URL: https://issues.apache.org/jira/browse/YARN-5649
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Rohith Sharma K S
Assignee: Rohith Sharma K S





