[jira] [Commented] (YARN-513) Verify all clients will wait for RM to restart
[ https://issues.apache.org/jira/browse/YARN-513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13640150#comment-13640150 ] nemon lou commented on YARN-513: What about the admin client? refreshQueues, refreshNodes, etc. These will be needed in HA. Verify all clients will wait for RM to restart -- Key: YARN-513 URL: https://issues.apache.org/jira/browse/YARN-513 Project: Hadoop YARN Issue Type: Sub-task Components: resourcemanager Reporter: Bikas Saha Assignee: Xuan Gong When the RM is restarting, the NM, AM and Clients should wait for some time for the RM to come back up. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (YARN-606) negative queue metrics apps Failed
nemon lou created YARN-606: -- Summary: negative queue metrics apps Failed Key: YARN-606 URL: https://issues.apache.org/jira/browse/YARN-606 Project: Hadoop YARN Issue Type: Bug Components: resourcemanager Affects Versions: 2.0.3-alpha Reporter: nemon lou Priority: Minor Queue metrics apps Failed can be negative in some cases (more than one attempt for an application can cause this). It's confusing if we use this metric directly. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-606) negative queue metrics apps Failed
[ https://issues.apache.org/jira/browse/YARN-606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13640214#comment-13640214 ] nemon lou commented on YARN-606: The submitApp() method in QueueMetrics.java causes the negative value; it has this logic:
{code}
public void submitApp(String user, int attemptId) {
  if (attemptId == 1) {
    appsSubmitted.incr();
  } else {
    appsFailed.decr();
  }
  ...
}
{code}
This logic was introduced by MAPREDUCE-3870. negative queue metrics apps Failed - Key: YARN-606 URL: https://issues.apache.org/jira/browse/YARN-606 Project: Hadoop YARN Issue Type: Bug Components: resourcemanager Affects Versions: 2.0.3-alpha Reporter: nemon lou Priority: Minor Queue metrics apps Failed can be negative in some cases (more than one attempt for an application can cause this). It's confusing if we use this metric directly. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
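To make the failure mode above concrete, here is a self-contained illustration of how the quoted submitApp() logic can drive the appsFailed counter below zero. It uses plain int fields in place of the real Hadoop metrics classes, so it only mirrors the control flow described in the comment, not the actual QueueMetrics implementation.
{code}
// Illustration only: plain ints stand in for the real Hadoop metrics counters.
public class QueueMetricsNegativeDemo {
  private int appsSubmitted = 0;
  private int appsFailed = 0;

  // Mirrors the quoted submitApp() logic from QueueMetrics.java.
  void submitApp(String user, int attemptId) {
    if (attemptId == 1) {
      appsSubmitted++;   // first attempt counts as a submission
    } else {
      appsFailed--;      // later attempts retract a failure that may never have been recorded
    }
  }

  public static void main(String[] args) {
    QueueMetricsNegativeDemo m = new QueueMetricsNegativeDemo();
    m.submitApp("nemon", 1);  // first attempt
    m.submitApp("nemon", 2);  // second attempt, no failure was recorded in between
    System.out.println("appsSubmitted=" + m.appsSubmitted + ", appsFailed=" + m.appsFailed);
    // Prints: appsSubmitted=1, appsFailed=-1
  }
}
{code}
Running it prints appsSubmitted=1, appsFailed=-1: the second attempt decrements a counter that was never incremented for this application.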
[jira] [Updated] (YARN-606) negative queue metrics apps Failed
[ https://issues.apache.org/jira/browse/YARN-606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] nemon lou updated YARN-606: --- Summary: negative queue metrics apps Failed (was: negative queue metrcis apps Failed) negative queue metrics apps Failed - Key: YARN-606 URL: https://issues.apache.org/jira/browse/YARN-606 Project: Hadoop YARN Issue Type: Bug Components: resourcemanager Affects Versions: 2.0.3-alpha Reporter: nemon lou Priority: Minor Queue metrics apps Failed can be negative in some cases (more than one attempt for an application can cause this). It's confusing if we use this metric directly. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-561) Nodemanager should set some key information into the environment of every container that it launches.
[ https://issues.apache.org/jira/browse/YARN-561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13640329#comment-13640329 ] Hudson commented on YARN-561: - Integrated in Hadoop-Yarn-trunk #193 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/193/]) YARN-561. Modified NodeManager to set key information into the environment of every container that it launches. Contributed by Xuan Gong. MAPREDUCE-5175. Updated MR App to not set envs that will be set by NMs anyways after YARN-561. Contributed by Xuan Gong. (Revision 1471156) Result = SUCCESS vinodkv : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1471156 Files : * /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/LocalContainerLauncher.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/MapReduceChildJVM.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/YarnChild.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java * /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationConstants.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher/src/main/java/org/apache/hadoop/yarn/applications/unmanagedamlauncher/UnmanagedAMLauncher.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DefaultContainerExecutor.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationContainerInitEvent.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationImpl.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/Container.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java * 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainersLauncher.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/dao/ContainerInfo.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/DummyContainerManager.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestLinuxContainerExecutor.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestLinuxContainerExecutorWithMocks.java *
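For context on what the committed change means for application code, the sketch below shows a container reading the kind of information the NodeManager is now expected to inject into its environment. The variable names (CONTAINER_ID, NM_HOST, NM_PORT, NM_HTTP_PORT) are assumptions based on the patch description and the touched ApplicationConstants.java; check that file for the authoritative names.
{code}
// Sketch: container-side code reading environment variables the NM is expected
// to set after YARN-561. The exact variable names are assumptions.
public class ContainerEnvProbe {
  public static void main(String[] args) {
    String containerId = System.getenv("CONTAINER_ID");
    String nmHost = System.getenv("NM_HOST");
    String nmPort = System.getenv("NM_PORT");
    String nmHttpPort = System.getenv("NM_HTTP_PORT");
    System.out.println("Running as container " + containerId
        + " on NM " + nmHost + ":" + nmPort
        + " (web " + nmHttpPort + ")");
  }
}
{code}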
[jira] [Commented] (YARN-581) Test and verify that app delegation tokens are restored after RM restart
[ https://issues.apache.org/jira/browse/YARN-581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13640330#comment-13640330 ] Hudson commented on YARN-581: - Integrated in Hadoop-Yarn-trunk #193 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/193/]) YARN-581. Added a test to verify that app delegation tokens are restored after RM restart. Contributed by Jian He. (Revision 1471187) Result = SUCCESS vinodkv : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1471187 Files : * /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java Test and verify that app delegation tokens are restored after RM restart Key: YARN-581 URL: https://issues.apache.org/jira/browse/YARN-581 Project: Hadoop YARN Issue Type: Sub-task Components: resourcemanager Reporter: Bikas Saha Assignee: Jian He Fix For: 2.0.5-beta Attachments: YARN-581.1.patch, YARN-581.2.patch The code already saves the delegation tokens in AppSubmissionContext. Upon restart the AppSubmissionContext is used to submit the application again and so restores the delegation tokens. This jira tracks testing and verifying this functionality in a secure setup. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
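As background for the test, the sketch below shows the usual way an application's delegation tokens end up in the submission context that the RM persists and replays on restart. The Credentials/DataOutputBuffer calls are standard Hadoop APIs; the setter name on ContainerLaunchContext has varied across 2.x releases (setContainerTokens vs. setTokens), so treat the last line as illustrative rather than exact.
{code}
import java.nio.ByteBuffer;
import org.apache.hadoop.io.DataOutputBuffer;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;

public class TokenPackingSketch {
  // Serialize the caller's tokens into the launch context inside the submission context.
  static void packTokens(ContainerLaunchContext clc) throws Exception {
    Credentials credentials = UserGroupInformation.getCurrentUser().getCredentials();
    DataOutputBuffer dob = new DataOutputBuffer();
    credentials.writeTokenStorageToStream(dob);                   // serialize all tokens
    ByteBuffer tokens = ByteBuffer.wrap(dob.getData(), 0, dob.getLength());
    clc.setTokens(tokens);                                        // assumed setter name; varies by release
  }
}
{code}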
[jira] [Commented] (YARN-581) Test and verify that app delegation tokens are restored after RM restart
[ https://issues.apache.org/jira/browse/YARN-581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13640415#comment-13640415 ] Hudson commented on YARN-581: - Integrated in Hadoop-Hdfs-trunk #1382 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1382/]) YARN-581. Added a test to verify that app delegation tokens are restored after RM restart. Contributed by Jian He. (Revision 1471187) Result = FAILURE vinodkv : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1471187 Files : * /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java Test and verify that app delegation tokens are restored after RM restart Key: YARN-581 URL: https://issues.apache.org/jira/browse/YARN-581 Project: Hadoop YARN Issue Type: Sub-task Components: resourcemanager Reporter: Bikas Saha Assignee: Jian He Fix For: 2.0.5-beta Attachments: YARN-581.1.patch, YARN-581.2.patch The code already saves the delegation tokens in AppSubmissionContext. Upon restart the AppSubmissionContext is used to submit the application again and so restores the delegation tokens. This jira tracks testing and verifying this functionality in a secure setup. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-561) Nodemanager should set some key information into the environment of every container that it launches.
[ https://issues.apache.org/jira/browse/YARN-561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13640414#comment-13640414 ] Hudson commented on YARN-561: - Integrated in Hadoop-Hdfs-trunk #1382 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1382/]) YARN-561. Modified NodeManager to set key information into the environment of every container that it launches. Contributed by Xuan Gong. MAPREDUCE-5175. Updated MR App to not set envs that will be set by NMs anyways after YARN-561. Contributed by Xuan Gong. (Revision 1471156) Result = FAILURE vinodkv : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1471156 Files : * /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/LocalContainerLauncher.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/MapReduceChildJVM.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/YarnChild.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/MRAppMaster.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java * /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationConstants.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-unmanaged-am-launcher/src/main/java/org/apache/hadoop/yarn/applications/unmanagedamlauncher/UnmanagedAMLauncher.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DefaultContainerExecutor.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationContainerInitEvent.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationImpl.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/Container.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java * 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainersLauncher.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/dao/ContainerInfo.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/DummyContainerManager.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestLinuxContainerExecutor.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestLinuxContainerExecutorWithMocks.java *
[jira] [Commented] (YARN-605) Failing unit test in TestNMWebServices when using git for source control
[ https://issues.apache.org/jira/browse/YARN-605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13640418#comment-13640418 ] Thomas Graves commented on YARN-605: I'd like to understand what exactly is different before changing anything. Failing unit test in TestNMWebServices when using git for source control - Key: YARN-605 URL: https://issues.apache.org/jira/browse/YARN-605 Project: Hadoop YARN Issue Type: Bug Reporter: Hitesh Shah Assignee: Hitesh Shah Attachments: YARN-605.1.patch Failed tests: testNode(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testNodeSlash(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testNodeDefault(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testNodeInfo(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testNodeInfoSlash(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testNodeInfoDefault(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testSingleNodesXML(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh 
source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-581) Test and verify that app delegation tokens are restored after RM restart
[ https://issues.apache.org/jira/browse/YARN-581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13640465#comment-13640465 ] Hudson commented on YARN-581: - Integrated in Hadoop-Mapreduce-trunk #1409 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1409/]) YARN-581. Added a test to verify that app delegation tokens are restored after RM restart. Contributed by Jian He. (Revision 1471187) Result = SUCCESS vinodkv : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1471187 Files : * /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestRMRestart.java Test and verify that app delegation tokens are restored after RM restart Key: YARN-581 URL: https://issues.apache.org/jira/browse/YARN-581 Project: Hadoop YARN Issue Type: Sub-task Components: resourcemanager Reporter: Bikas Saha Assignee: Jian He Fix For: 2.0.5-beta Attachments: YARN-581.1.patch, YARN-581.2.patch The code already saves the delegation tokens in AppSubmissionContext. Upon restart the AppSubmissionContext is used to submit the application again and so restores the delegation tokens. This jira tracks testing and verifying this functionality in a secure setup. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-605) Failing unit test in TestNMWebServices when using git for source control
[ https://issues.apache.org/jira/browse/YARN-605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13640495#comment-13640495 ] Hitesh Shah commented on YARN-605: --
{code}
String a = "3.0.0-SNAPSHOT from f6e4a3a01fd1e341b3750c5843e2588a26d0db31 (HEAD, origin/trunk, origin/HEAD, yarn605, trunk) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789";
String b = "3.0.0-SNAPSHOT from f6e4a3a01fd1e341b3750c5843e2588a26d0db31 (HEAD, origin/trunk, origin/HEAD, yarn605, trunk) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789";
String c = "3.0.0-SNAPSHOT from f6e4a3a01fd1e341b3750c5843e2588a26d0db31 HEAD, origin/trunk, origin/HEAD, yarn605, trunk by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789";
String d = "3.0.0-SNAPSHOT from f6e4a3a01fd1e341b3750c5843e2588a26d0db31 HEAD, origin/trunk, origin/HEAD, yarn605, trunk by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789";
System.out.println("A equals B: " + a.equals(b));
System.out.println("B equals A: " + b.equals(a));
System.out.println("A matches B: " + a.matches(b));
System.out.println("B matches A: " + b.matches(a));
System.out.println("C equals D: " + c.equals(d));
System.out.println("D equals C: " + d.equals(c));
System.out.println("C matches D: " + c.matches(d));
System.out.println("D matches C: " + d.matches(c));
{code}
The above results in:
{code}
A equals B: true
B equals A: true
A matches B: false
B matches A: false
C equals D: true
D equals C: true
C matches D: true
D matches C: true
{code}
The String.matches() call fails because matches() interprets its argument as a regular expression, so the parentheses in the git-decorated version string are treated as regex grouping rather than literal characters. Failing unit test in TestNMWebServices when using git for source control - Key: YARN-605 URL: https://issues.apache.org/jira/browse/YARN-605 Project: Hadoop YARN Issue Type: Bug Reporter: Hitesh Shah Assignee: Hitesh Shah Attachments: YARN-605.1.patch Failed tests: testNode(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testNodeSlash(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testNodeDefault(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testNodeInfo(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from
fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testNodeInfoSlash(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testNodeInfoDefault(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum
[jira] [Updated] (YARN-577) ApplicationReport does not provide progress value of application
[ https://issues.apache.org/jira/browse/YARN-577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hitesh Shah updated YARN-577: - Attachment: YARN-577.combinedwithMR.patch Adding a combined patch with MR changes as there is a change in the BuilderUtils api that breaks MR. ApplicationReport does not provide progress value of application Key: YARN-577 URL: https://issues.apache.org/jira/browse/YARN-577 Project: Hadoop YARN Issue Type: Sub-task Reporter: Hitesh Shah Assignee: Hitesh Shah Attachments: YARN-577.1.patch, YARN-577.combinedwithMR.patch An application sends its progress % to the RM via AllocateRequest. This should be able to be retrieved by a client via the ApplicationReport. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-605) Failing unit test in TestNMWebServices when using git for source control
[ https://issues.apache.org/jira/browse/YARN-605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13640521#comment-13640521 ] Hitesh Shah commented on YARN-605: -- Using git 1.7.3.4. .gitconfig has: {code} [log] decorate = short {code} git log -n 1 gives: {code} commit cb52393b4bb7109a257f0e1a7e547e666f087308 (HEAD, origin/trunk, origin/HEAD, yarn577, trunk) Author: Robert Joseph Evans bo...@apache.org Date: Wed Apr 24 14:11:50 2013 + MAPREDUCE-5069. add concrete common implementations of CombineFileInputFormat (Sangjin Lee via bobby) git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/trunk@1471424 13f79535-47bb-0310-9956-ffa450edef68 {code} The log decorate ends up displaying branch information with the commit info which breaks the match functionality. Failing unit test in TestNMWebServices when using git for source control - Key: YARN-605 URL: https://issues.apache.org/jira/browse/YARN-605 Project: Hadoop YARN Issue Type: Bug Reporter: Hitesh Shah Assignee: Hitesh Shah Attachments: YARN-605.1.patch Failed tests: testNode(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testNodeSlash(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testNodeDefault(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testNodeInfo(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testNodeInfoSlash(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testNodeInfoDefault(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from 
fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testSingleNodesXML(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-577) ApplicationReport does not provide progress value of application
[ https://issues.apache.org/jira/browse/YARN-577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13640559#comment-13640559 ] Hadoop QA commented on YARN-577: {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12580290/YARN-577.combinedwithMR.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 2 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-YARN-Build/816//testReport/ Console output: https://builds.apache.org/job/PreCommit-YARN-Build/816//console This message is automatically generated. ApplicationReport does not provide progress value of application Key: YARN-577 URL: https://issues.apache.org/jira/browse/YARN-577 Project: Hadoop YARN Issue Type: Sub-task Reporter: Hitesh Shah Assignee: Hitesh Shah Attachments: YARN-577.1.patch, YARN-577.combinedwithMR.patch An application sends its progress % to the RM via AllocateRequest. This should be able to be retrieved by a client via the ApplicationReport. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-513) Verify all clients will wait for RM to restart
[ https://issues.apache.org/jira/browse/YARN-513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13640574#comment-13640574 ] Vinod Kumar Vavilapalli commented on YARN-513: -- Good catch Nemon! Yes, we should do this for RMAdminClient too. Verify all clients will wait for RM to restart -- Key: YARN-513 URL: https://issues.apache.org/jira/browse/YARN-513 Project: Hadoop YARN Issue Type: Sub-task Components: resourcemanager Reporter: Bikas Saha Assignee: Xuan Gong When the RM is restarting, the NM, AM and Clients should wait for some time for the RM to come back up. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
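Below is a minimal sketch of the kind of bounded retry an admin-side client could use while the RM restarts. RMAdminOps and the refreshQueues() signature are hypothetical placeholders for whatever admin protocol proxy the client actually holds; only the retry pattern itself is the point.
{code}
import java.io.IOException;
import java.util.concurrent.TimeUnit;

public class AdminRetrySketch {
  interface RMAdminOps { void refreshQueues() throws IOException; }   // hypothetical admin proxy

  static void refreshQueuesWithRetry(RMAdminOps admin, int maxRetries) throws Exception {
    for (int attempt = 1; ; attempt++) {
      try {
        admin.refreshQueues();
        return;                                   // RM answered; done
      } catch (IOException e) {                   // e.g. connection refused during restart
        if (attempt >= maxRetries) {
          throw e;                                // give up after the configured budget
        }
        TimeUnit.SECONDS.sleep(10);               // wait for the RM to come back up
      }
    }
  }
}
{code}
A production version would presumably pick the retry count and sleep interval from configuration rather than hard-coding them.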
[jira] [Commented] (YARN-605) Failing unit test in TestNMWebServices when using git for source control
[ https://issues.apache.org/jira/browse/YARN-605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13640608#comment-13640608 ] Thomas Graves commented on YARN-605: thanks Hitesh for digging into that. I'm fine with changing it to equals. I haven't actually tried your patch yet but I would expect the TestHSWebServices to have a similar check. Failing unit test in TestNMWebServices when using git for source control - Key: YARN-605 URL: https://issues.apache.org/jira/browse/YARN-605 Project: Hadoop YARN Issue Type: Bug Reporter: Hitesh Shah Assignee: Hitesh Shah Attachments: YARN-605.1.patch Failed tests: testNode(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testNodeSlash(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testNodeDefault(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testNodeInfo(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testNodeInfoSlash(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testNodeInfoDefault(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testSingleNodesXML(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT 
from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
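A sketch of the "change it to equals" idea discussed above: compare the build-version strings with assertEquals instead of String.matches(), so regex metacharacters such as '(' in the git-decorated version string cannot affect the comparison. The helper and message below are illustrative, not the actual TestNMWebServices/TestHSWebServices code.
{code}
import static org.junit.Assert.assertEquals;

public class BuildVersionCheckSketch {
  static void checkBuildVersion(String expected, String got) {
    // Before (problematic): assertTrue(got.matches(expected)); -- treats expected as a regex
    assertEquals("hadoopBuildVersion doesn't match", expected, got);
  }
}
{code}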
[jira] [Commented] (YARN-595) Refactor fair scheduler to use common Resources
[ https://issues.apache.org/jira/browse/YARN-595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13640612#comment-13640612 ] Tom White commented on YARN-595: +1 Refactor fair scheduler to use common Resources --- Key: YARN-595 URL: https://issues.apache.org/jira/browse/YARN-595 Project: Hadoop YARN Issue Type: Sub-task Components: scheduler Affects Versions: 2.0.3-alpha Reporter: Sandy Ryza Assignee: Sandy Ryza Attachments: YARN-595-1.patch, YARN-595.patch, YARN-595.patch resourcemanager.fair and resourcemanager.resources have two copies of basically the same code for operations on Resource objects -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
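For readers unfamiliar with the duplication being removed, the sketch below shows the kind of Resource arithmetic that existed in two copies (resourcemanager.fair and resourcemanager.resources) and that the patch consolidates. The method names are a hypothetical sketch, not the exact API of the shared Resources class.
{code}
import org.apache.hadoop.yarn.api.records.Resource;

public final class ResourcesSketch {
  private ResourcesSketch() {}

  // Accumulate rhs into lhs and return lhs, mirroring a typical add() helper.
  public static Resource addTo(Resource lhs, Resource rhs) {
    lhs.setMemory(lhs.getMemory() + rhs.getMemory());
    return lhs;
  }

  // True if 'smaller' fits within 'bigger' on the memory dimension.
  public static boolean fitsIn(Resource smaller, Resource bigger) {
    return smaller.getMemory() <= bigger.getMemory();
  }
}
{code}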
[jira] [Commented] (YARN-422) Add AM-NM client library
[ https://issues.apache.org/jira/browse/YARN-422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13640660#comment-13640660 ] Zhijie Shen commented on YARN-422: -- Yes, a non-blocking API is a must, but shall we define AMNMClientImpl as a simple wrapper around the RPC (but to all NMs), such that it contains the blocking APIs? After that, we could define another class, AMNMClientAsync, to further wrap AMNMClientImpl and provide the non-blocking APIs, similar to what we did with AMRMClientImpl and AMRMClientAsync. What do you think? Add AM-NM client library Key: YARN-422 URL: https://issues.apache.org/jira/browse/YARN-422 Project: Hadoop YARN Issue Type: Sub-task Reporter: Bikas Saha Assignee: Zhijie Shen Attachments: proposal_v1.pdf Create a simple wrapper over the AM-NM container protocol to hide the details of the protocol implementation. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
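A sketch of the layering proposed in the comment above: a blocking AMNMClient wrapper over the AM-NM RPC, plus an AMNMClientAsync that delegates to it and reports results through a callback handler, mirroring the AMRMClientImpl/AMRMClientAsync split. All names and signatures are illustrative of the proposal only, not the eventual YARN API (see the follow-up comment below, which argues for skipping the blocking layer).
{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AMNMClientSketch {
  interface AMNMClient {                      // blocking layer (hypothetical)
    void startContainer(String containerId) throws Exception;
  }

  interface CallbackHandler {                 // app-supplied callbacks (hypothetical)
    void onContainerStarted(String containerId);
    void onStartContainerError(String containerId, Throwable t);
  }

  static class AMNMClientAsync {              // non-blocking layer wrapping the blocking one
    private final AMNMClient client;
    private final CallbackHandler handler;
    private final ExecutorService pool = Executors.newSingleThreadExecutor();

    AMNMClientAsync(AMNMClient client, CallbackHandler handler) {
      this.client = client;
      this.handler = handler;
    }

    void startContainerAsync(final String containerId) {
      pool.submit(new Runnable() {
        public void run() {
          try {
            client.startContainer(containerId);          // blocking call off the caller's thread
            handler.onContainerStarted(containerId);
          } catch (Throwable t) {
            handler.onStartContainerError(containerId, t);
          }
        }
      });
    }
  }
}
{code}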
[jira] [Commented] (YARN-422) Add AM-NM client library
[ https://issues.apache.org/jira/browse/YARN-422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13640717#comment-13640717 ] Vinod Kumar Vavilapalli commented on YARN-422: -- bq. Yes, a non-blocking API is a must, but shall we define AMNMClientImpl as a simple wrapper around the RPC (but to all NMs), such that it contains the blocking APIs? Like I mentioned, I don't see any value in it. Let's skip it for now. I do feel we'll never need one, but we can pursue that later anyway. Add AM-NM client library Key: YARN-422 URL: https://issues.apache.org/jira/browse/YARN-422 Project: Hadoop YARN Issue Type: Sub-task Reporter: Bikas Saha Assignee: Zhijie Shen Attachments: proposal_v1.pdf Create a simple wrapper over the AM-NM container protocol to hide the details of the protocol implementation. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-577) ApplicationReport does not provide progress value of application
[ https://issues.apache.org/jira/browse/YARN-577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13640736#comment-13640736 ] Vinod Kumar Vavilapalli commented on YARN-577: -- Append a % character to the progress printed via ApplicationCLI.java? I wish the web-UI also used ApplicationReport instead of digging into RMApp; that way we'd be sure that whatever gets exposed on the web-UI is also available to the cmd/RPC clients. Do you wish to investigate that here itself? We can push it out if it turns out to be a lot of effort - fine either way. ApplicationReport does not provide progress value of application Key: YARN-577 URL: https://issues.apache.org/jira/browse/YARN-577 Project: Hadoop YARN Issue Type: Sub-task Reporter: Hitesh Shah Assignee: Hitesh Shah Attachments: YARN-577.1.patch, YARN-577.combinedwithMR.patch An application sends its progress % to the RM via AllocateRequest. This should be able to be retrieved by a client via the ApplicationReport. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
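A hedged sketch of the suggestion above: format the application's progress with a trailing % when printing a report. It assumes the patch exposes progress as a float in [0, 1] through an ApplicationReport#getProgress() accessor; that name is taken from this JIRA's intent rather than a settled public API.
{code}
import org.apache.hadoop.yarn.api.records.ApplicationReport;

public class ProgressPrintSketch {
  static String formatProgress(ApplicationReport report) {
    float progress = report.getProgress();          // assumed accessor added by YARN-577
    return String.format("%.2f%%", progress * 100); // e.g. "42.00%"
  }
}
{code}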
[jira] [Updated] (YARN-592) Container logs lost for the application when NM gets restarted
[ https://issues.apache.org/jira/browse/YARN-592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinod Kumar Vavilapalli updated YARN-592: - Target Version/s: 2.0.5-beta Setting target version to 2.0.5-beta. Devaraj, I see you assigned it to yourself; I suppose you have cycles to get this done in time for 2.0.5-beta. Let me know otherwise, thanks. Container logs lost for the application when NM gets restarted -- Key: YARN-592 URL: https://issues.apache.org/jira/browse/YARN-592 Project: Hadoop YARN Issue Type: Bug Components: nodemanager Affects Versions: 2.0.1-alpha, 2.0.3-alpha Reporter: Devaraj K Assignee: Devaraj K Priority: Critical While running a big job, if the NM goes down for some reason and comes back, it will do log aggregation only for the newly launched containers and delete all the container logs for the application. In that case we don't get the container logs, from HDFS or the local dirs, for the containers that were launched and completed before the restart. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-363) yarn proxyserver fails to find webapps/proxy directory on startup
[ https://issues.apache.org/jira/browse/YARN-363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13640821#comment-13640821 ] Vinod Kumar Vavilapalli commented on YARN-363: -- Kenji, please assign tickets to yourself if you are taking them up. You can do that by clicking the Assign To Me button in the top menu bar. Also, if you think the patch is ready, submit it by clicking Submit Patch. That way you can catch reviewers' attention. yarn proxyserver fails to find webapps/proxy directory on startup - Key: YARN-363 URL: https://issues.apache.org/jira/browse/YARN-363 Project: Hadoop YARN Issue Type: Bug Affects Versions: 0.23.6 Reporter: Jason Lowe Attachments: YARN-363.patch Starting up the proxy server fails with this error:
{noformat}
2013-01-29 17:37:41,357 FATAL webproxy.WebAppProxy (WebAppProxy.java:start(99)) - Could not start proxy web server
java.io.FileNotFoundException: webapps/proxy not found in CLASSPATH
        at org.apache.hadoop.http.HttpServer.getWebAppsPath(HttpServer.java:533)
        at org.apache.hadoop.http.HttpServer.init(HttpServer.java:225)
        at org.apache.hadoop.http.HttpServer.init(HttpServer.java:164)
        at org.apache.hadoop.yarn.server.webproxy.WebAppProxy.start(WebAppProxy.java:90)
        at org.apache.hadoop.yarn.service.CompositeService.start(CompositeService.java:68)
        at org.apache.hadoop.yarn.server.webproxy.WebAppProxyServer.main(WebAppProxyServer.java:94)
{noformat}
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-363) yarn proxyserver fails to find webapps/proxy directory on startup
[ https://issues.apache.org/jira/browse/YARN-363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13640822#comment-13640822 ] Vinod Kumar Vavilapalli commented on YARN-363: -- [~jlowe], I am curious why we didn't see this in previous versions of branch-0.23, e.g. on your installs. yarn proxyserver fails to find webapps/proxy directory on startup - Key: YARN-363 URL: https://issues.apache.org/jira/browse/YARN-363 Project: Hadoop YARN Issue Type: Bug Affects Versions: 0.23.6 Reporter: Jason Lowe Attachments: YARN-363.patch Starting up the proxy server fails with this error:
{noformat}
2013-01-29 17:37:41,357 FATAL webproxy.WebAppProxy (WebAppProxy.java:start(99)) - Could not start proxy web server
java.io.FileNotFoundException: webapps/proxy not found in CLASSPATH
        at org.apache.hadoop.http.HttpServer.getWebAppsPath(HttpServer.java:533)
        at org.apache.hadoop.http.HttpServer.init(HttpServer.java:225)
        at org.apache.hadoop.http.HttpServer.init(HttpServer.java:164)
        at org.apache.hadoop.yarn.server.webproxy.WebAppProxy.start(WebAppProxy.java:90)
        at org.apache.hadoop.yarn.service.CompositeService.start(CompositeService.java:68)
        at org.apache.hadoop.yarn.server.webproxy.WebAppProxyServer.main(WebAppProxyServer.java:94)
{noformat}
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-605) Failing unit test in TestNMWebServices when using git for source control
[ https://issues.apache.org/jira/browse/YARN-605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13640828#comment-13640828 ] Vinod Kumar Vavilapalli commented on YARN-605: -- The changes look good to me too. +1. Ran all the tests, passed. Checking it in. Failing unit test in TestNMWebServices when using git for source control - Key: YARN-605 URL: https://issues.apache.org/jira/browse/YARN-605 Project: Hadoop YARN Issue Type: Bug Reporter: Hitesh Shah Assignee: Hitesh Shah Attachments: YARN-605.1.patch Failed tests: testNode(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testNodeSlash(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testNodeDefault(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testNodeInfo(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testNodeInfoSlash(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testNodeInfoDefault(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testSingleNodesXML(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, 
mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-605) Failing unit test in TestNMWebServices when using git for source control
[ https://issues.apache.org/jira/browse/YARN-605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13640833#comment-13640833 ] Hitesh Shah commented on YARN-605: -- [~tgraves] I hadn't gotten around to playing around with MR tests yet. Have file a separate jira for TestHSWebServices. Will run the full MR tests and modify the jira/patch as needed. Failing unit test in TestNMWebServices when using git for source control - Key: YARN-605 URL: https://issues.apache.org/jira/browse/YARN-605 Project: Hadoop YARN Issue Type: Bug Reporter: Hitesh Shah Assignee: Hitesh Shah Attachments: YARN-605.1.patch Failed tests: testNode(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testNodeSlash(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testNodeDefault(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testNodeInfo(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testNodeInfoSlash(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testNodeInfoDefault(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 testSingleNodesXML(org.apache.hadoop.yarn.server.nodemanager.webapp.TestNMWebServices): hadoopBuildVersion doesn't match, got: 
3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 expected: 3.0.0-SNAPSHOT from fddcdcfb3cfe7dcc4f77c1ac953dd2cc0a890c62 (HEAD, origin/trunk, origin/HEAD, mrx-track) by Hitesh source checksum f89f5c9b9c9d44cf3be5c2686f2d789 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
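Note that the got/expected strings in the report above are byte-for-byte identical, which is consistent with a regex-style comparison tripping over the parentheses and commas that a git checkout adds to the build-version string (regex metacharacters). A minimal, self-contained illustration of that pitfall; this is a sketch of the suspected failure mode, not the actual TestNMWebServices code:
{code}
public class RegexMatchPitfall {
  public static void main(String[] args) {
    // Build-version string as produced from a git checkout; the branch list
    // is wrapped in parentheses and separated by commas.
    String version =
        "3.0.0-SNAPSHOT from fddcdcf (HEAD, origin/trunk) by Hitesh";

    // Plain equality on the identical string succeeds ...
    System.out.println(version.equals(version));   // true

    // ... but a regex-style match against the same text fails, because
    // '(' and ')' are treated as a capturing group, not literal characters.
    System.out.println(version.matches(version));  // false
  }
}
{code}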
[jira] [Commented] (YARN-363) yarn proxyserver fails to find webapps/proxy directory on startup
[ https://issues.apache.org/jira/browse/YARN-363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13640835#comment-13640835 ] Jason Lowe commented on YARN-363: - By default it runs as part of the ResourceManager process, and our installs don't change that behavior. As such we don't normally start it up as a separate process. I only stumbled across this when verifying YARN-354. yarn proxyserver fails to find webapps/proxy directory on startup - Key: YARN-363 URL: https://issues.apache.org/jira/browse/YARN-363 Project: Hadoop YARN Issue Type: Bug Affects Versions: 0.23.6 Reporter: Jason Lowe Attachments: YARN-363.patch Starting up the proxy server fails with this error:
{noformat}
2013-01-29 17:37:41,357 FATAL webproxy.WebAppProxy (WebAppProxy.java:start(99)) - Could not start proxy web server
java.io.FileNotFoundException: webapps/proxy not found in CLASSPATH
    at org.apache.hadoop.http.HttpServer.getWebAppsPath(HttpServer.java:533)
    at org.apache.hadoop.http.HttpServer.<init>(HttpServer.java:225)
    at org.apache.hadoop.http.HttpServer.<init>(HttpServer.java:164)
    at org.apache.hadoop.yarn.server.webproxy.WebAppProxy.start(WebAppProxy.java:90)
    at org.apache.hadoop.yarn.service.CompositeService.start(CompositeService.java:68)
    at org.apache.hadoop.yarn.server.webproxy.WebAppProxyServer.main(WebAppProxyServer.java:94)
{noformat}
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
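The lookup that fails here resolves the webapps directory through the classloader, so the webapps/proxy resource has to be on the classpath of the standalone process. A rough sketch of that style of lookup, for illustration only (this is not the actual HttpServer.getWebAppsPath code):
{code}
import java.io.FileNotFoundException;
import java.net.URL;

public class WebAppsLookup {
  // Resolve "webapps/<app>" via the classloader, the way an embedded HTTP
  // server typically locates its bundled web resources.
  static String getWebAppsPath(String appName) throws FileNotFoundException {
    URL url = WebAppsLookup.class.getClassLoader()
        .getResource("webapps/" + appName);
    if (url == null) {
      // This is the situation the stack trace above reports: the resource
      // directory is simply not on the process classpath.
      throw new FileNotFoundException(
          "webapps/" + appName + " not found in CLASSPATH");
    }
    String path = url.toString();
    return path.substring(0, path.lastIndexOf('/'));
  }

  public static void main(String[] args) {
    try {
      System.out.println(getWebAppsPath("proxy"));
    } catch (FileNotFoundException e) {
      System.out.println("Lookup failed: " + e.getMessage());
    }
  }
}
{code}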
[jira] [Updated] (YARN-577) ApplicationReport does not provide progress value of application
[ https://issues.apache.org/jira/browse/YARN-577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hitesh Shah updated YARN-577: - Attachment: YARN-577.2.patch ApplicationReport does not provide progress value of application Key: YARN-577 URL: https://issues.apache.org/jira/browse/YARN-577 Project: Hadoop YARN Issue Type: Sub-task Reporter: Hitesh Shah Assignee: Hitesh Shah Attachments: YARN-577.1.patch, YARN-577.2.patch, YARN-577.combined.2.patch, YARN-577.combinedwithMR.patch An application sends its progress % to the RM via AllocateRequest. This should be able to be retrieved by a client via the ApplicationReport. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
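For context, this is the kind of client-side accessor the patch is expected to enable; the helper below is a minimal sketch that assumes progress is exposed as a float in the 0..1 range via ApplicationReport#getProgress():
{code}
import org.apache.hadoop.yarn.api.records.ApplicationReport;

public final class ProgressPrinter {
  private ProgressPrinter() {}

  // The AM reports its progress to the RM in AllocateRequest; the report
  // surfaces that same value back to clients via getProgress().
  public static String formatProgress(ApplicationReport report) {
    float progress = report.getProgress();       // assumed range: 0.0f .. 1.0f
    return String.format("%.1f%%", progress * 100f);
  }
}
{code}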
[jira] [Commented] (YARN-577) ApplicationReport does not provide progress value of application
[ https://issues.apache.org/jira/browse/YARN-577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13640880#comment-13640880 ] Hitesh Shah commented on YARN-577: -- @Vinod, added the % in the output for now. Will dig into the web-ui aspects but would prefer to keep that in a separate jira/patch. ApplicationReport does not provide progress value of application Key: YARN-577 URL: https://issues.apache.org/jira/browse/YARN-577 Project: Hadoop YARN Issue Type: Sub-task Reporter: Hitesh Shah Assignee: Hitesh Shah Attachments: YARN-577.1.patch, YARN-577.2.patch, YARN-577.combined.2.patch, YARN-577.combinedwithMR.patch An application sends its progress % to the RM via AllocateRequest. This should be able to be retrieved by a client via the ApplicationReport. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-577) ApplicationReport does not provide progress value of application
[ https://issues.apache.org/jira/browse/YARN-577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13640883#comment-13640883 ] Hitesh Shah commented on YARN-577: -- Forgot to mention that there are also the UI, webservices and the command line, which could potentially share the same bits of information (not to mention anything exposed via JMX beans). ApplicationReport does not provide progress value of application Key: YARN-577 URL: https://issues.apache.org/jira/browse/YARN-577 Project: Hadoop YARN Issue Type: Sub-task Reporter: Hitesh Shah Assignee: Hitesh Shah Attachments: YARN-577.1.patch, YARN-577.2.patch, YARN-577.combined.2.patch, YARN-577.combinedwithMR.patch An application sends its progress % to the RM via AllocateRequest. This should be able to be retrieved by a client via the ApplicationReport. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (YARN-506) Move to common utils FileUtil#setReadable/Writable/Executable and FileUtil#canRead/Write/Execute
[ https://issues.apache.org/jira/browse/YARN-506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Mitic updated YARN-506: Attachment: YARN-506.commonfileutils.2.patch Rebasing the patch. Move to common utils FileUtil#setReadable/Writable/Executable and FileUtil#canRead/Write/Execute Key: YARN-506 URL: https://issues.apache.org/jira/browse/YARN-506 Project: Hadoop YARN Issue Type: Bug Affects Versions: 3.0.0 Reporter: Ivan Mitic Assignee: Ivan Mitic Attachments: YARN-506.commonfileutils.2.patch, YARN-506.commonfileutils.patch Move to common utils described in HADOOP-9413 that work well cross-platform. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
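As a rough illustration of what the move buys, the pattern is to call the FileUtil helpers from HADOOP-9413 instead of java.io.File's permission methods, which are not reliable cross-platform. The snippet below is a sketch of that usage; the file name is made up and the exact helper signatures should be checked against the HADOOP-9413 patch:
{code}
import java.io.File;
import java.io.IOException;
import org.apache.hadoop.fs.FileUtil;

public class PermissionSketch {
  public static void main(String[] args) throws IOException {
    File script = new File("example-launch-script.sh");  // illustrative name
    if (script.createNewFile()) {
      // Cross-platform permission handling via the common FileUtil helpers
      // rather than File#setReadable/setExecutable.
      FileUtil.setReadable(script, true);
      FileUtil.setExecutable(script, true);
      System.out.println("executable? " + FileUtil.canExecute(script));
    }
  }
}
{code}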
[jira] [Updated] (YARN-126) yarn rmadmin help message contains reference to hadoop cli and JT
[ https://issues.apache.org/jira/browse/YARN-126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rémy SAISSY updated YARN-126: - Attachment: (was: YARN-126.patch) yarn rmadmin help message contains reference to hadoop cli and JT - Key: YARN-126 URL: https://issues.apache.org/jira/browse/YARN-126 Project: Hadoop YARN Issue Type: Bug Components: client Affects Versions: 2.0.3-alpha Reporter: Thomas Graves Assignee: Rémy SAISSY Labels: usability Attachments: YARN-126.patch has option to specify a job tracker and the last line for general command line syntax had bin/hadoop command [genericOptions] [commandOptions] ran yarn rmadmin to get usage: RMAdmin Usage: java RMAdmin [-refreshQueues] [-refreshNodes] [-refreshUserToGroupsMappings] [-refreshSuperUserGroupsConfiguration] [-refreshAdminAcls] [-refreshServiceAcl] [-help [cmd]] Generic options supported are -conf configuration file specify an application configuration file -D property=value use value for given property -fs local|namenode:port specify a namenode -jt local|jobtracker:port specify a job tracker -files comma separated list of files specify comma separated files to be copied to the map reduce cluster -libjars comma separated list of jars specify comma separated jar files to include in the classpath. -archives comma separated list of archives specify comma separated archives to be unarchived on the compute machines. The general command line syntax is bin/hadoop command [genericOptions] [commandOptions] -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (YARN-562) NM should reject containers allocated by previous RM
[ https://issues.apache.org/jira/browse/YARN-562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He updated YARN-562: - Attachment: YARN-562.10.patch New patch addresses the above comments, and adds two new exception types. NM should reject containers allocated by previous RM Key: YARN-562 URL: https://issues.apache.org/jira/browse/YARN-562 Project: Hadoop YARN Issue Type: Sub-task Components: resourcemanager Reporter: Jian He Assignee: Jian He Attachments: YARN-562.10.patch, YARN-562.1.patch, YARN-562.2.patch, YARN-562.3.patch, YARN-562.4.patch, YARN-562.5.patch, YARN-562.6.patch, YARN-562.7.patch, YARN-562.8.patch, YARN-562.9.patch Its possible that after RM shutdown, before AM goes down,AM still call startContainer on NM with containers allocated by previous RM. When RM comes back, NM doesn't know whether this container launch request comes from previous RM or the current RM. we should reject containers allocated by previous RM -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-577) ApplicationReport does not provide progress value of application
[ https://issues.apache.org/jira/browse/YARN-577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13640962#comment-13640962 ] Hadoop QA commented on YARN-577: {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12580359/YARN-577.combined.2.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 2 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-YARN-Build/817//testReport/ Console output: https://builds.apache.org/job/PreCommit-YARN-Build/817//console This message is automatically generated. ApplicationReport does not provide progress value of application Key: YARN-577 URL: https://issues.apache.org/jira/browse/YARN-577 Project: Hadoop YARN Issue Type: Sub-task Reporter: Hitesh Shah Assignee: Hitesh Shah Attachments: YARN-577.1.patch, YARN-577.2.patch, YARN-577.combined.2.patch, YARN-577.combinedwithMR.patch An application sends its progress % to the RM via AllocateRequest. This should be able to be retrieved by a client via the ApplicationReport. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-562) NM should reject containers allocated by previous RM
[ https://issues.apache.org/jira/browse/YARN-562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13640997#comment-13640997 ] Hadoop QA commented on YARN-562: {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12580381/YARN-562.10.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 17 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-YARN-Build/818//testReport/ Console output: https://builds.apache.org/job/PreCommit-YARN-Build/818//console This message is automatically generated. NM should reject containers allocated by previous RM Key: YARN-562 URL: https://issues.apache.org/jira/browse/YARN-562 Project: Hadoop YARN Issue Type: Sub-task Components: resourcemanager Reporter: Jian He Assignee: Jian He Attachments: YARN-562.10.patch, YARN-562.1.patch, YARN-562.2.patch, YARN-562.3.patch, YARN-562.4.patch, YARN-562.5.patch, YARN-562.6.patch, YARN-562.7.patch, YARN-562.8.patch, YARN-562.9.patch Its possible that after RM shutdown, before AM goes down,AM still call startContainer on NM with containers allocated by previous RM. When RM comes back, NM doesn't know whether this container launch request comes from previous RM or the current RM. we should reject containers allocated by previous RM -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (YARN-607) Split up TestFairScheduler
Sandy Ryza created YARN-607: --- Summary: Split up TestFairScheduler Key: YARN-607 URL: https://issues.apache.org/jira/browse/YARN-607 Project: Hadoop YARN Issue Type: Improvement Components: scheduler Affects Versions: 2.0.4-alpha Reporter: Sandy Ryza Assignee: Sandy Ryza TestFairScheduler is 1,500 lines and bound to grow far beyond this with the new features like multi-resource scheduling that are to be added. It would make sense to factor out a set of common test utils and then split it into a few different classes that test different aspects of the fair scheduler. Here's a possible breakdown: TestFairSchedulerAllocations TestFairSchedulerPreemption TestFairSchedulerConfiguration TestFairSchedulerHierarchicalQueues -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
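One way to read the proposed breakdown is a shared fixture class plus one test class per aspect. A minimal sketch of that shape, with class names following the list above and the actual setup elided because it depends on the scheduler's configuration API:
{code}
import org.junit.Before;

// Shared fixture: scheduler instance, mock RM context, default allocation
// file, plus the common helper methods factored out of TestFairScheduler.
public abstract class FairSchedulerTestBase {
  @Before
  public void setUpScheduler() {
    // common setup elided
  }
}

// Each aspect keeps only its own cases:
//   TestFairSchedulerAllocations, TestFairSchedulerPreemption,
//   TestFairSchedulerConfiguration, TestFairSchedulerHierarchicalQueues
class TestFairSchedulerAllocations extends FairSchedulerTestBase {
}
{code}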
[jira] [Commented] (YARN-579) Make ApplicationToken part of Container's token list to help RM-restart
[ https://issues.apache.org/jira/browse/YARN-579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13641251#comment-13641251 ] Siddharth Seth commented on YARN-579: - +1. The patch looks good. Splitting it into two (MAPREDUCE-5181 for the mr part). The same needs to be done for the CLIENT secret. Opening a jira for this. Make ApplicationToken part of Container's token list to help RM-restart --- Key: YARN-579 URL: https://issues.apache.org/jira/browse/YARN-579 Project: Hadoop YARN Issue Type: Sub-task Reporter: Vinod Kumar Vavilapalli Assignee: Vinod Kumar Vavilapalli Attachments: YARN-579-20130422.1.txt Container is already persisted for helping RM restart. Instead of explicitly setting ApplicationToken in AM's env, if we change it to be in Container, we can avoid env and can also help restart. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
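In practical terms, once the token travels with the container's token list, the AM no longer reads a dedicated environment variable but picks the application token out of the credentials attached to its process UGI. A hedged sketch of that lookup; the token-kind string below is an assumption for illustration, not necessarily the constant used in the patch:
{code}
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;

public class AppTokenLookup {
  // Assumed kind name; the real constant lives on the token identifier class.
  private static final String APP_TOKEN_KIND = "YARN_APPLICATION_TOKEN";

  public static Token<?> findApplicationToken() throws Exception {
    // The NM makes the container's tokens available to the AM process, so
    // they show up in the current user's credentials.
    for (Token<?> token : UserGroupInformation.getCurrentUser().getTokens()) {
      if (APP_TOKEN_KIND.equals(token.getKind().toString())) {
        return token;
      }
    }
    return null;  // not running in a YARN container, or token not present
  }
}
{code}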
[jira] [Updated] (YARN-579) Make ApplicationToken part of Container's token list to help RM-restart
[ https://issues.apache.org/jira/browse/YARN-579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth updated YARN-579: Attachment: YARN-579-20130422.1_YARNChanges.txt Same patch, with the MR changes removed. Make ApplicationToken part of Container's token list to help RM-restart --- Key: YARN-579 URL: https://issues.apache.org/jira/browse/YARN-579 Project: Hadoop YARN Issue Type: Sub-task Affects Versions: 2.0.4-alpha Reporter: Vinod Kumar Vavilapalli Assignee: Vinod Kumar Vavilapalli Attachments: YARN-579-20130422.1.txt, YARN-579-20130422.1_YARNChanges.txt Container is already persisted for helping RM restart. Instead of explicitly setting ApplicationToken in AM's env, if we change it to be in Container, we can avoid env and can also help restart. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-579) Make ApplicationToken part of Container's token list to help RM-restart
[ https://issues.apache.org/jira/browse/YARN-579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13641264#comment-13641264 ] Hadoop QA commented on YARN-579: {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12580439/YARN-579-20130422.1_YARNChanges.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 2 new or modified test files. {color:red}-1 javac{color:red}. The patch appears to cause the build to fail. Console output: https://builds.apache.org/job/PreCommit-YARN-Build/819//console This message is automatically generated. Make ApplicationToken part of Container's token list to help RM-restart --- Key: YARN-579 URL: https://issues.apache.org/jira/browse/YARN-579 Project: Hadoop YARN Issue Type: Sub-task Affects Versions: 2.0.4-alpha Reporter: Vinod Kumar Vavilapalli Assignee: Vinod Kumar Vavilapalli Attachments: YARN-579-20130422.1.txt, YARN-579-20130422.1_YARNChanges.txt Container is already persisted for helping RM restart. Instead of explicitly setting ApplicationToken in AM's env, if we change it to be in Container, we can avoid env and can also help restart. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-579) Make ApplicationToken part of Container's token list to help RM-restart
[ https://issues.apache.org/jira/browse/YARN-579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13641273#comment-13641273 ] Jian He commented on YARN-579: -- The AppToken is stored in ContainerLaunchContext, not in Container, so the appToken should be set before the ApplicationSubmissionContext is saved to help RM-restart. Make ApplicationToken part of Container's token list to help RM-restart --- Key: YARN-579 URL: https://issues.apache.org/jira/browse/YARN-579 Project: Hadoop YARN Issue Type: Sub-task Affects Versions: 2.0.4-alpha Reporter: Vinod Kumar Vavilapalli Assignee: Vinod Kumar Vavilapalli Attachments: YARN-579-20130422.1.txt, YARN-579-20130422.1_YARNChanges.txt Container is already persisted for helping RM restart. Instead of explicitly setting ApplicationToken in AM's env, if we change it to be in Container, we can avoid env and can also help restart. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-562) NM should reject containers allocated by previous RM
[ https://issues.apache.org/jira/browse/YARN-562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13641308#comment-13641308 ] Bikas Saha commented on YARN-562: - Shouldn't the new exception be inheriting from YarnException, the common base class? I actually like NMNotConnectedWithRMException because NotYetReady could be due to various other reasons. No strong opinion. Is there an existing InvalidContainerException for cases when ContainerToken is invalid? How about InvalidContainerException as a name. If the only thing the client can do is get a new container from the RM then there may not be any point in differentiating the reasons. If we really want to keep RM in the name then maybe InvalidContainerFromUnknownRM. Previous may not be correct. I think the invalidation needs to be done before sending the event because technically this thread could be suspended immediately after sending the event. So the handler thread could run before the invalidation happens.
{code}
dispatcher.getEventHandler().handle(
    new NodeManagerEvent(NodeManagerEventType.RESYNC));
+ // Invalidate the RMIdentifier while resync
+ setRMIdentifier(ResourceManagerConstants.RM_INVALID_IDENTIFIER);
  break;
{code}
Reads weird that container manager is notifying itself.
{code}
+
+ LOG.info("Notifying ContainerManager to block new container-requests as "
+     + "NodeManager is still starting.");
+ this.setBlockNewContainerRequests(true);
{code}
Would be good to continue looping until notified that the containermanager is no longer blocked.
{code}
+ try { // HERE set FLAG to stop thread
+   launchContainersThread.join();
+ super.setBlockNewContainerRequests(blockNewContainerRequests);
+ try { // HERE check FLAG to stop thread
+   while (numContainers++ < 10) {
{code}
NM should reject containers allocated by previous RM Key: YARN-562 URL: https://issues.apache.org/jira/browse/YARN-562 Project: Hadoop YARN Issue Type: Sub-task Components: resourcemanager Reporter: Jian He Assignee: Jian He Attachments: YARN-562.10.patch, YARN-562.1.patch, YARN-562.2.patch, YARN-562.3.patch, YARN-562.4.patch, YARN-562.5.patch, YARN-562.6.patch, YARN-562.7.patch, YARN-562.8.patch, YARN-562.9.patch Its possible that after RM shutdown, before AM goes down,AM still call startContainer on NM with containers allocated by previous RM. When RM comes back, NM doesn't know whether this container launch request comes from previous RM or the current RM. we should reject containers allocated by previous RM -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
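To make the intent of the RM-identifier check concrete, here is a toy model of the comparison being discussed. This is not the NodeManager code; only RM_INVALID_IDENTIFIER mirrors a name quoted above, everything else is illustrative:
{code}
public class StaleRMCheck {
  static final long RM_INVALID_IDENTIFIER = -1L;  // assumed sentinel value

  private volatile long currentRMIdentifier = RM_INVALID_IDENTIFIER;

  void onRegisteredWithRM(long rmIdentifier) { currentRMIdentifier = rmIdentifier; }

  // While resyncing, the NM forgets its RM identifier so every in-flight
  // start request is refused until re-registration completes.
  void onResync() { currentRMIdentifier = RM_INVALID_IDENTIFIER; }

  boolean shouldRejectStartContainer(long tokenRMIdentifier) {
    // Reject if we are not connected to any RM, or if the container token
    // was issued by a different (previous) RM instance.
    return currentRMIdentifier == RM_INVALID_IDENTIFIER
        || tokenRMIdentifier != currentRMIdentifier;
  }

  public static void main(String[] args) {
    StaleRMCheck nm = new StaleRMCheck();
    nm.onRegisteredWithRM(1001L);
    System.out.println(nm.shouldRejectStartContainer(1000L)); // true: previous RM
    System.out.println(nm.shouldRejectStartContainer(1001L)); // false: current RM
  }
}
{code}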
[jira] [Commented] (YARN-577) ApplicationReport does not provide progress value of application
[ https://issues.apache.org/jira/browse/YARN-577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13641314#comment-13641314 ] Vinod Kumar Vavilapalli commented on YARN-577: -- +1, this looks good, checking it in. Can you file a ticket about using ApplicationReport everywhere? Tx. ApplicationReport does not provide progress value of application Key: YARN-577 URL: https://issues.apache.org/jira/browse/YARN-577 Project: Hadoop YARN Issue Type: Sub-task Reporter: Hitesh Shah Assignee: Hitesh Shah Attachments: YARN-577.1.patch, YARN-577.2.patch, YARN-577.combined.2.patch, YARN-577.combinedwithMR.patch An application sends its progress % to the RM via AllocateRequest. This should be able to be retrieved by a client via the ApplicationReport. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-579) Make ApplicationToken part of Container's token list to help RM-restart
[ https://issues.apache.org/jira/browse/YARN-579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13641315#comment-13641315 ] Bikas Saha commented on YARN-579: - appToken is attempt-specific, so it needs to be stored per app attempt. It needs to be generated when the app attempt is assigned a container and saved in appAttemptStateData. Basically the code needs to be moved from AMLauncher to RMAppAttemptImpl; AMLauncher will simply use the token from the attempt. Make ApplicationToken part of Container's token list to help RM-restart --- Key: YARN-579 URL: https://issues.apache.org/jira/browse/YARN-579 Project: Hadoop YARN Issue Type: Sub-task Affects Versions: 2.0.4-alpha Reporter: Vinod Kumar Vavilapalli Assignee: Vinod Kumar Vavilapalli Attachments: YARN-579-20130422.1.txt, YARN-579-20130422.1_YARNChanges.txt Container is already persisted for helping RM restart. Instead of explicitly setting ApplicationToken in AM's env, if we change it to be in Container, we can avoid env and can also help restart. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-579) Make ApplicationToken part of Container's token list to help RM-restart
[ https://issues.apache.org/jira/browse/YARN-579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13641322#comment-13641322 ] Siddharth Seth commented on YARN-579: - bq. The AppToken is stored in ContainerLaunchContext,not in Container, so appToken should be set before ApplicationSubmissionContext is saved to help RM-restart bq. appToken is attempt specific. So it needs to be store per app attempt. So it needs to be generated when app attempt is assigned a container and saved in appAttemptStateData. Basically the code needs to be moved from AMLauncher to the rmappattempimpl. AMLauncher will simply use the token from the attempt. I believe this is being addressed in YARN-582. Committing this patch. The latest test-patch can be ignored since the MR bit is necessary for trunk to compile. Make ApplicationToken part of Container's token list to help RM-restart --- Key: YARN-579 URL: https://issues.apache.org/jira/browse/YARN-579 Project: Hadoop YARN Issue Type: Sub-task Affects Versions: 2.0.4-alpha Reporter: Vinod Kumar Vavilapalli Assignee: Vinod Kumar Vavilapalli Attachments: YARN-579-20130422.1.txt, YARN-579-20130422.1_YARNChanges.txt Container is already persisted for helping RM restart. Instead of explicitly setting ApplicationToken in AM's env, if we change it to be in Container, we can avoid env and can also help restart. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-579) Make ApplicationToken part of Container's token list to help RM-restart
[ https://issues.apache.org/jira/browse/YARN-579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13641326#comment-13641326 ] Vinod Kumar Vavilapalli commented on YARN-579: -- Jian/Bikas, shoo, get off my ticket and move onto YARN-582.. ;) Make ApplicationToken part of Container's token list to help RM-restart --- Key: YARN-579 URL: https://issues.apache.org/jira/browse/YARN-579 Project: Hadoop YARN Issue Type: Sub-task Affects Versions: 2.0.4-alpha Reporter: Vinod Kumar Vavilapalli Assignee: Vinod Kumar Vavilapalli Attachments: YARN-579-20130422.1.txt, YARN-579-20130422.1_YARNChanges.txt Container is already persisted for helping RM restart. Instead of explicitly setting ApplicationToken in AM's env, if we change it to be in Container, we can avoid env and can also help restart. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-579) Make ApplicationToken part of Container's token list to help RM-restart
[ https://issues.apache.org/jira/browse/YARN-579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13641334#comment-13641334 ] Hudson commented on YARN-579: - Integrated in Hadoop-trunk-Commit #3660 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/3660/]) YARN-579. Stop setting the Application Token in the AppMaster env, in favour of the copy present in the container token field. Contributed by Vinod Kumar Vavilapalli. (Revision 1471814) Result = SUCCESS sseth : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1471814 Files : * /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationConstants.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/AMRMClientImpl.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/amlauncher/AMLauncher.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestAMAuthorization.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/security/TestApplicationTokens.java Make ApplicationToken part of Container's token list to help RM-restart --- Key: YARN-579 URL: https://issues.apache.org/jira/browse/YARN-579 Project: Hadoop YARN Issue Type: Sub-task Affects Versions: 2.0.4-alpha Reporter: Vinod Kumar Vavilapalli Assignee: Vinod Kumar Vavilapalli Fix For: 2.0.5-beta Attachments: YARN-579-20130422.1.txt, YARN-579-20130422.1_YARNChanges.txt Container is already persisted for helping RM restart. Instead of explicitly setting ApplicationToken in AM's env, if we change it to be in Container, we can avoid env and can also help restart. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (YARN-577) ApplicationReport does not provide progress value of application
[ https://issues.apache.org/jira/browse/YARN-577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13641372#comment-13641372 ] Hitesh Shah commented on YARN-577: -- Filed YARN-608. ApplicationReport does not provide progress value of application Key: YARN-577 URL: https://issues.apache.org/jira/browse/YARN-577 Project: Hadoop YARN Issue Type: Sub-task Reporter: Hitesh Shah Assignee: Hitesh Shah Attachments: YARN-577.1.patch, YARN-577.2.patch, YARN-577.combined.2.patch, YARN-577.combinedwithMR.patch An application sends its progress % to the RM via AllocateRequest. This should be able to be retrieved by a client via the ApplicationReport. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (YARN-608) Consolidate use of ApplicationReport wherever overall application info is needed
Hitesh Shah created YARN-608: Summary: Consolidate use of ApplicationReport wherever overall application info is needed Key: YARN-608 URL: https://issues.apache.org/jira/browse/YARN-608 Project: Hadoop YARN Issue Type: Bug Reporter: Hitesh Shah RM UI, RM webservices, YARN CLI all use different approaches at providing back information. Instead of displaying everything via a common ApplicationReport object, each layer uses RMApp independently and potentially could end up displaying different subsets of information. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (YARN-386) [Umbrella] YARN API Changes
[ https://issues.apache.org/jira/browse/YARN-386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinod Kumar Vavilapalli updated YARN-386: - Target Version/s: 2.0.5-beta [Umbrella] YARN API Changes --- Key: YARN-386 URL: https://issues.apache.org/jira/browse/YARN-386 Project: Hadoop YARN Issue Type: Bug Reporter: Vinod Kumar Vavilapalli This is the umbrella ticket to capture any and every API cleanup that we wish to do before YARN can be deemed beta/stable. Doing this API cleanup now and ASAP will help us escape the pain of supporting bad APIs in beta/stable releases. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (YARN-128) RM Restart
[ https://issues.apache.org/jira/browse/YARN-128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinod Kumar Vavilapalli updated YARN-128: - Target Version/s: 2.0.5-beta RM Restart --- Key: YARN-128 URL: https://issues.apache.org/jira/browse/YARN-128 Project: Hadoop YARN Issue Type: Bug Components: resourcemanager Affects Versions: 2.0.0-alpha Reporter: Arun C Murthy Assignee: Bikas Saha Attachments: MR-4343.1.patch, restart-12-11-zkstore.patch, restart-fs-store-11-17.patch, restart-zk-store-11-17.patch, RM-recovery-initial-thoughts.txt, RMRestartPhase1.pdf, YARN-128.full-code.3.patch, YARN-128.full-code-4.patch, YARN-128.full-code.5.patch, YARN-128.new-code-added.3.patch, YARN-128.new-code-added-4.patch, YARN-128.old-code-removed.3.patch, YARN-128.old-code-removed.4.patch, YARN-128.patch We should resurrect 'RM Restart' which we disabled sometime during the RM refactor. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (YARN-422) Add AM-NM client library
[ https://issues.apache.org/jira/browse/YARN-422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhijie Shen updated YARN-422: - Attachment: AMNMClient_Defination.txt I've drafted an outline of the AMNMClient library, without documentation, tests, etc. While I work towards a complete patch, please have a look at the client's definition and share your ideas. I defined AMNMClient by referring to AMRMClient. In general, there are the following parts in the library. 1. AMNMClient defines three basic APIs. 2. NMCommunicator is a wrapper around the communication with a single container, as defined in ContainerManager. It is an inner class of AMNMClientImpl. 3. AMNMClientImpl implements the APIs. It maintains a one-to-many relationship with all the containers that are to be started, and contains a collection of NMCommunicators. 4. AMNMClientAsync is the ultimate class that an AM wants to use. It implements the three APIs in a non-blocking way. Internally, there's an event dispatcher, which starts when AMNMClientAsync starts. Calling the three APIs just schedules an event on the dispatcher. The dispatcher delivers the event to an idle thread in the thread pool, where AMNMClientImpl is called to do the real work. This part follows the design of ContainerLauncherImpl. In addition, as the execution is asynchronous, a Callback interface is exposed to the AM. Add AM-NM client library Key: YARN-422 URL: https://issues.apache.org/jira/browse/YARN-422 Project: Hadoop YARN Issue Type: Sub-task Reporter: Bikas Saha Assignee: Zhijie Shen Attachments: AMNMClient_Defination.txt, proposal_v1.pdf Create a simple wrapper over the AM-NM container protocol to hide the details of the protocol implementation. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
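To visualize the shape described above, a hedged sketch of what the blocking interface and the async callback might look like; the method names simply mirror the three ContainerManager operations and are assumptions, not the committed API:
{code}
import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.api.records.ContainerStatus;

// Blocking client: the "three basic APIs", one per ContainerManager call.
public interface AMNMClient {
  void startContainer(Container container, ContainerLaunchContext context)
      throws Exception;
  ContainerStatus getContainerStatus(ContainerId containerId) throws Exception;
  void stopContainer(ContainerId containerId) throws Exception;
}

// Async flavour: each call just queues an event on an internal dispatcher,
// and results come back through a callback the AM registers.
interface AMNMClientCallback {
  void onContainerStarted(ContainerId containerId);
  void onContainerStatusReceived(ContainerId containerId, ContainerStatus status);
  void onContainerStopped(ContainerId containerId);
  void onError(ContainerId containerId, Throwable t);
}
{code}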
[jira] [Commented] (YARN-577) ApplicationReport does not provide progress value of application
[ https://issues.apache.org/jira/browse/YARN-577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13641465#comment-13641465 ] Hudson commented on YARN-577: - Integrated in Hadoop-trunk-Commit #3662 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/3662/]) YARN-577. Add application-progress also to ApplicationReport. Contributed by Hitesh Shah. MAPREDUCE-5178. Update MR App to set progress in ApplicationReport after YARN-577. Contributed by Hitesh Shah. (Revision 1475636) Result = SUCCESS vinodkv : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1475636 Files : * /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/NotRunningJob.java * /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestClientServiceDelegate.java * /hadoop/common/trunk/hadoop-yarn-project/CHANGES.txt * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ApplicationReport.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ApplicationReportPBImpl.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/ApplicationCLI.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestYarnCLI.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/BuilderUtils.java * /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java ApplicationReport does not provide progress value of application Key: YARN-577 URL: https://issues.apache.org/jira/browse/YARN-577 Project: Hadoop YARN Issue Type: Sub-task Reporter: Hitesh Shah Assignee: Hitesh Shah Fix For: 2.0.5-beta Attachments: YARN-577.1.patch, YARN-577.2.patch, YARN-577.combined.2.patch, YARN-577.combinedwithMR.patch An application sends its progress % to the RM via AllocateRequest. This should be able to be retrieved by a client via the ApplicationReport. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira