[jira] [Updated] (YARN-796) Allow for (admin) labels on nodes and resource-requests

2014-10-05 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-796:

Attachment: YARN-796.node-label.consolidate.11.patch

Attached ver.11, which fixes the javac warnings, findbugs warnings, and test failures.

> Allow for (admin) labels on nodes and resource-requests
> ---
>
> Key: YARN-796
> URL: https://issues.apache.org/jira/browse/YARN-796
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.4.1
>Reporter: Arun C Murthy
>Assignee: Wangda Tan
> Attachments: LabelBasedScheduling.pdf, 
> Node-labels-Requirements-Design-doc-V1.pdf, 
> Node-labels-Requirements-Design-doc-V2.pdf, YARN-796-Diagram.pdf, 
> YARN-796.node-label.consolidate.1.patch, 
> YARN-796.node-label.consolidate.10.patch, 
> YARN-796.node-label.consolidate.11.patch, 
> YARN-796.node-label.consolidate.2.patch, 
> YARN-796.node-label.consolidate.3.patch, 
> YARN-796.node-label.consolidate.4.patch, 
> YARN-796.node-label.consolidate.5.patch, 
> YARN-796.node-label.consolidate.6.patch, 
> YARN-796.node-label.consolidate.7.patch, 
> YARN-796.node-label.consolidate.8.patch, YARN-796.node-label.demo.patch.1, 
> YARN-796.patch, YARN-796.patch4
>
>
> It will be useful for admins to specify labels for nodes. Examples of labels 
> are OS, processor architecture etc.
> We should expose these labels and allow applications to specify labels on 
> resource-requests.
> Obviously we need to support admin operations on adding/removing node labels.





[jira] [Updated] (YARN-2571) RM to support YARN registry

2014-10-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated YARN-2571:
-
Attachment: YARN-2571-001.patch

This is the patch covering everything under {{hadoop-yarn-server}} to integrate the 
registry with the RM. The RM takes on the tasks of
# creating user nodes with their access permissions (and the system accounts)
# recursively purging records whose persistence is tied to a container, 
app-attempt or app when they terminate
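
For illustration, a minimal sketch of a record whose persistence is tied to a 
container, using the API names from the YARN-913 patch series (the path and the 
{{registry}} handle here are placeholder assumptions, not the patch itself):

{code}
import org.apache.hadoop.registry.client.api.BindFlags;
import org.apache.hadoop.registry.client.api.RegistryOperations;
import org.apache.hadoop.registry.client.types.ServiceRecord;
import org.apache.hadoop.registry.client.types.yarn.PersistencePolicies;
import org.apache.hadoop.registry.client.types.yarn.YarnRegistryAttributes;

import java.io.IOException;

public class ContainerScopedRecord {
  /**
   * Bind a record whose persistence is tied to a container; the RM's
   * purge pass removes it (and its children) once that container ends.
   */
  static void bind(RegistryOperations registry, String containerId)
      throws IOException {
    ServiceRecord record = new ServiceRecord();
    record.set(YarnRegistryAttributes.YARN_ID, containerId);
    record.set(YarnRegistryAttributes.YARN_PERSISTENCE,
        PersistencePolicies.CONTAINER);
    registry.bind("/users/alice/services/org-example/app/components/c1",
        record, BindFlags.OVERWRITE);
  }
}
{code}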



> RM to support YARN registry 
> 
>
> Key: YARN-2571
> URL: https://issues.apache.org/jira/browse/YARN-2571
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: YARN-2571-001.patch
>
>
> The RM needs to (optionally) integrate with the YARN registry:
> # startup: create the /services and /users paths with system ACLs (yarn, hdfs 
> principals)
> # app-launch: create the user directory /users/$username with the relevant 
> permissions (CRD) for them to create subnodes.
> # attempt, container, app completion: remove service records with the 
> matching persistence and ID
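
For illustration only, a hedged sketch of what granting a user create/read/delete 
(CRD) rights on their own path could look like directly against ZooKeeper (the 
patch routes this through the registry's own security layer; the path, SASL 
scheme and username below are assumptions):

{code}
import java.util.Collections;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs.Perms;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.ACL;
import org.apache.zookeeper.data.Id;

public class UserPathSetup {
  // Grant the user create/read/delete (CRD) on their registry subtree, so
  // they can publish records while the RM keeps ownership of the parent.
  static void createUserNode(ZooKeeper zk, String username) throws Exception {
    ACL crd = new ACL(Perms.CREATE | Perms.READ | Perms.DELETE,
        new Id("sasl", username));
    zk.create("/registry/users/" + username, new byte[0],
        Collections.singletonList(crd), CreateMode.PERSISTENT);
  }
}
{code}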





[jira] [Assigned] (YARN-2646) distributed shell & tests to use registry

2014-10-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned YARN-2646:


Assignee: Steve Loughran

> distributed shell & tests to use registry
> -
>
> Key: YARN-2646
> URL: https://issues.apache.org/jira/browse/YARN-2646
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, resourcemanager
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: YARN-2646-001.patch
>
>
> For testing, and as an example, the Distributed Shell should create a record 
> for itself in the service registry; the tests can look for this. This will 
> act as a test for the RM integration.





[jira] [Updated] (YARN-2646) distributed shell & tests to use registry

2014-10-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated YARN-2646:
-
Attachment: YARN-2646-001.patch

This is the part of the YARN-913 patch which modifies the distributed shell to 
(optionally) register itself, plus tests verifying that it does so and that 
app-attempt-scoped persistent entries are purged when the application finishes.
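
A rough sketch of the purge check such a test can make, assuming the registry 
API names from the YARN-913 patches (the path argument is a placeholder):

{code}
import org.apache.hadoop.fs.PathNotFoundException;
import org.apache.hadoop.registry.client.api.RegistryOperations;
import org.apache.hadoop.registry.client.types.ServiceRecord;

public class RegistryPurgeCheck {
  // While the app runs its record resolves; once the attempt completes,
  // records registered with app-attempt persistence must be gone.
  static void assertPurged(RegistryOperations registry, String path)
      throws Exception {
    try {
      ServiceRecord record = registry.resolve(path);
      throw new AssertionError("record still present: " + record);
    } catch (PathNotFoundException expected) {
      // the RM purged the entry, as required
    }
  }
}
{code}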

> distributed shell & tests to use registry
> -
>
> Key: YARN-2646
> URL: https://issues.apache.org/jira/browse/YARN-2646
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, resourcemanager
>Reporter: Steve Loughran
> Attachments: YARN-2646-001.patch
>
>
> For testing, and as an example, the Distributed Shell should create a record 
> for itself in the service registry; the tests can look for this. This will 
> act as a test for the RM integration.





[jira] [Commented] (YARN-1879) Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol

2014-10-05 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14159832#comment-14159832
 ] 

Tsuyoshi OZAWA commented on YARN-1879:
--

Sounds good. Then we can simply remove the retry-cache; I'm updating the next 
patch to remove it. [~kkambatl], please let me know if you have opinions about 
the design change.

> Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol
> ---
>
> Key: YARN-1879
> URL: https://issues.apache.org/jira/browse/YARN-1879
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Jian He
>Assignee: Tsuyoshi OZAWA
>Priority: Critical
> Attachments: YARN-1879.1.patch, YARN-1879.1.patch, 
> YARN-1879.11.patch, YARN-1879.12.patch, YARN-1879.13.patch, 
> YARN-1879.14.patch, YARN-1879.15.patch, YARN-1879.16.patch, 
> YARN-1879.17.patch, YARN-1879.18.patch, YARN-1879.19.patch, 
> YARN-1879.2-wip.patch, YARN-1879.2.patch, YARN-1879.20.patch, 
> YARN-1879.21.patch, YARN-1879.3.patch, YARN-1879.4.patch, YARN-1879.5.patch, 
> YARN-1879.6.patch, YARN-1879.7.patch, YARN-1879.8.patch, YARN-1879.9.patch
>
>






[jira] [Commented] (YARN-1879) Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol

2014-10-05 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14159831#comment-14159831
 ] 

Jian He commented on YARN-1879:
---

bq. How about finishApplicationMaster?
ApplicationMasterService#unregisterAttempt is invoked in 
RMAppAttempt#BaseFinalTransition, and BaseFinalTransition is invoked only after 
the AM container process exits and the NM reports the finished AM container to 
the RM. This means that if the AM is still calling finishApplicationMaster, 
BaseFinalTransition will never have been invoked, and the responseMap entry for 
that AM will never be null.

allocate also seems not to need the retry-cache, as it internally already 
implements a retry-cache-like mechanism that returns the previous response for 
a duplicate request (see the sketch below).
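
A hedged sketch of that mechanism, simplified from what 
{{ApplicationMasterService}} does (the surrounding class and the {{schedule}} 
hook are placeholders):

{code}
import org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest;
import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;

public abstract class AllocateDedup {
  private AllocateResponse lastResponse; // kept per app-attempt by the RM

  // A retried allocate carries the previous responseId, so the cached
  // response is replayed instead of the request being processed twice.
  synchronized AllocateResponse allocate(AllocateRequest request) {
    if (request.getResponseId() + 1 == lastResponse.getResponseId()) {
      return lastResponse; // duplicate of the last request: replay
    }
    if (request.getResponseId() + 1 < lastResponse.getResponseId()) {
      throw new IllegalStateException("AM allocate request out of sync");
    }
    AllocateResponse response = schedule(request);
    response.setResponseId(lastResponse.getResponseId() + 1);
    lastResponse = response;
    return response;
  }

  abstract AllocateResponse schedule(AllocateRequest request);
}
{code}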

> Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol
> ---
>
> Key: YARN-1879
> URL: https://issues.apache.org/jira/browse/YARN-1879
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Jian He
>Assignee: Tsuyoshi OZAWA
>Priority: Critical
> Attachments: YARN-1879.1.patch, YARN-1879.1.patch, 
> YARN-1879.11.patch, YARN-1879.12.patch, YARN-1879.13.patch, 
> YARN-1879.14.patch, YARN-1879.15.patch, YARN-1879.16.patch, 
> YARN-1879.17.patch, YARN-1879.18.patch, YARN-1879.19.patch, 
> YARN-1879.2-wip.patch, YARN-1879.2.patch, YARN-1879.20.patch, 
> YARN-1879.21.patch, YARN-1879.3.patch, YARN-1879.4.patch, YARN-1879.5.patch, 
> YARN-1879.6.patch, YARN-1879.7.patch, YARN-1879.8.patch, YARN-1879.9.patch
>
>






[jira] [Created] (YARN-2646) distributed shell & tests to use registry

2014-10-05 Thread Steve Loughran (JIRA)
Steve Loughran created YARN-2646:


 Summary: distributed shell & tests to use registry
 Key: YARN-2646
 URL: https://issues.apache.org/jira/browse/YARN-2646
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Steve Loughran


For testing, and as an example, the Distributed Shell should create a record 
for itself in the service registry; the tests can look for this. This will 
act as a test for the RM integration.





[jira] [Commented] (YARN-913) Add a way to register long-lived services in a YARN cluster

2014-10-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14159805#comment-14159805
 ] 

Hadoop QA commented on YARN-913:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12673044/YARN-913-018.patch
  against trunk revision 16333b4.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 37 new 
or modified test files.

  {color:red}-1 javac{color}.  The applied patch generated 1268 javac 
compiler warnings (more than the trunk's current 1267 warnings).

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/5271//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/5271//artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-registry.html
Javac warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/5271//artifact/patchprocess/diffJavacWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5271//console

This message is automatically generated.

> Add a way to register long-lived services in a YARN cluster
> ---
>
> Key: YARN-913
> URL: https://issues.apache.org/jira/browse/YARN-913
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: api, resourcemanager
>Affects Versions: 2.5.0, 2.4.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: 2014-09-03_Proposed_YARN_Service_Registry.pdf, 
> 2014-09-08_YARN_Service_Registry.pdf, RegistrationServiceDetails.txt, 
> YARN-913-001.patch, YARN-913-002.patch, YARN-913-003.patch, 
> YARN-913-003.patch, YARN-913-004.patch, YARN-913-006.patch, 
> YARN-913-007.patch, YARN-913-008.patch, YARN-913-009.patch, 
> YARN-913-010.patch, YARN-913-011.patch, YARN-913-012.patch, 
> YARN-913-013.patch, YARN-913-014.patch, YARN-913-015.patch, 
> YARN-913-016.patch, YARN-913-017.patch, YARN-913-018.patch, yarnregistry.pdf, 
> yarnregistry.pdf, yarnregistry.tla
>
>
> In a YARN cluster you can't predict where services will come up -or on what 
> ports. The services need to work those things out as they come up and then 
> publish them somewhere.
> Applications need to be able to find the service instance they are to bond to 
> -and not any others in the cluster.
> Some kind of service registry -in the RM, in ZK, could do this. If the RM 
> held the write access to the ZK nodes, it would be more secure than having 
> apps register with ZK themselves.





[jira] [Commented] (YARN-1879) Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol

2014-10-05 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14159806#comment-14159806
 ] 

Tsuyoshi OZAWA commented on YARN-1879:
--

[~jianhe], [~xgong], thanks for the suggestions. I agree regarding 
{{registerApplicationMaster}}. How about {{finishApplicationMaster}}? After the 
change we cannot distinguish a failed RPC from a successful one. What do you 
think?

> Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol
> ---
>
> Key: YARN-1879
> URL: https://issues.apache.org/jira/browse/YARN-1879
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Jian He
>Assignee: Tsuyoshi OZAWA
>Priority: Critical
> Attachments: YARN-1879.1.patch, YARN-1879.1.patch, 
> YARN-1879.11.patch, YARN-1879.12.patch, YARN-1879.13.patch, 
> YARN-1879.14.patch, YARN-1879.15.patch, YARN-1879.16.patch, 
> YARN-1879.17.patch, YARN-1879.18.patch, YARN-1879.19.patch, 
> YARN-1879.2-wip.patch, YARN-1879.2.patch, YARN-1879.20.patch, 
> YARN-1879.21.patch, YARN-1879.3.patch, YARN-1879.4.patch, YARN-1879.5.patch, 
> YARN-1879.6.patch, YARN-1879.7.patch, YARN-1879.8.patch, YARN-1879.9.patch
>
>






Re: Issues when I run hadoop-mapreduce-examples-2.5.1.jar

2014-10-05 Thread Mark Laney
It appears to me that the "input" directory may not have been created; the
"hadoop jar ..." command expects it to exist. In pseudo-distributed mode, check
that you've run the command from that website,
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html:

  $ bin/hdfs dfs -mkdir /user/<username>

and replace <username> with huan.

Mark

On Sat, Oct 4, 2014 at 4:18 AM, xiaopeng <1093218...@qq.com> wrote:

> Hi All,
>
> I install hadoop 2.5.1 on my computer as a single node cluster in
> Pseudo-Distributed mode.
>
> I followed the instructions in the web page
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html.
>
> It seems all right until I run mapreduce example below:
>
> huan@huan-ThinkPad-T410:~$ hadoop jar ~/hadoop-2.5.1/share/hadoop/
> mapreduce/hadoop-mapreduce-examples-2.5.1.jar grep input output
> 'dfs[a-z.]+'
> 14/10/02 21:43:42 INFO client.RMProxy: Connecting to ResourceManager at /
> 0.0.0.0:8032
> 14/10/02 21:43:43 WARN mapreduce.JobSubmitter: No job jar file set. User
> classes may not be found. See Job or Job#setJar(String).
> 14/10/02 21:43:43 INFO input.FileInputFormat: Total input paths to process
> : 30
> 14/10/02 21:43:44 INFO mapreduce.JobSubmitter: number of splits:30
> 14/10/02 21:43:44 INFO mapreduce.JobSubmitter: Submitting tokens for job:
> job_1412256375065_0001
> 14/10/02 21:43:45 INFO mapred.YARNRunner: Job jar is not present. Not
> adding any jar to the list of resources.
> 14/10/02 21:43:46 INFO impl.YarnClientImpl: Submitted application
> application_1412256375065_0001
> 14/10/02 21:43:46 INFO mapreduce.Job: The url to track the job:
> http://huan-ThinkPad-T410:8088/proxy/application_1412256375065_0001/
> 14/10/02 21:43:46 INFO mapreduce.Job: Running job: job_1412256375065_0001
> 14/10/02 21:44:34 INFO mapreduce.Job: Job job_1412256375065_0001 running
> in uber mode : false
>
> So please give me a guide of the problem? Thanks a lot.
>
> My configuration as below:
>
> yarn-site.xml:
> <configuration>
>   <property>
>     <name>yarn.nodemanager.aux-services</name>
>     <value>mapreduce_shuffle</value>
>   </property>
> </configuration>
>
> mapred-site.xml:
> <configuration>
>   <property>
>     <name>mapreduce.framework.name</name>
>     <value>yarn</value>
>   </property>
> </configuration>
>
> hdfs-site.xml:
> <configuration>
>   <property>
>     <name>dfs.replication</name>
>     <value>1</value>
>   </property>
> </configuration>
>
> core-site.xml:
> <configuration>
>   <property>
>     <name>fs.defaultFS</name>
>     <value>hdfs://localhost:9000</value>
>   </property>
> </configuration>
>
>



-- 
Mark Laney
Hortonworks
Technical Instructor
c: 720.308.8027
"Half of the world's data will be processed by Apache Hadoop within five
years"



[jira] [Resolved] (YARN-2550) TestAMRestart fails intermittently

2014-10-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du resolved YARN-2550.
--
Resolution: Duplicate

Yes, resolving it as a duplicate. Let's move the discussion to YARN-2483.

> TestAMRestart fails intermittently
> --
>
> Key: YARN-2550
> URL: https://issues.apache.org/jira/browse/YARN-2550
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Rohith
>
> testShouldNotCountFailureToMaxAttemptRetry(org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart)
>   Time elapsed: 50.64 sec  <<< FAILURE!
> java.lang.AssertionError: AppAttempt state is not correct (timedout) 
> expected: but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockAM.waitForState(MockAM.java:84)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.sendAMLaunched(MockRM.java:417)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.launchAM(MockRM.java:582)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.launchAndRegisterAM(MockRM.java:589)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.waitForNewAMToLaunchAndRegister(MockRM.java:182)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart.testShouldNotCountFailureToMaxAttemptRetry(TestAMRestart.java:402)





[jira] [Commented] (YARN-2645) TestAMRestart#testShouldNotCountFailureToMaxAttemptRetry fails in trunk

2014-10-05 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14159779#comment-14159779
 ] 

Junping Du commented on YARN-2645:
--

This sounds like a duplicate of YARN-2550 and YARN-2483. If nobody objects, I 
will resolve it as a duplicate.

> TestAMRestart#testShouldNotCountFailureToMaxAttemptRetry fails in trunk
> ---
>
> Key: YARN-2645
> URL: https://issues.apache.org/jira/browse/YARN-2645
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Ted Yu
>Priority: Minor
>
> From https://builds.apache.org/job/Hadoop-Yarn-trunk/702/ :
> {code}
> testShouldNotCountFailureToMaxAttemptRetry(org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart)
>   Time elapsed: 46.182 sec  <<< FAILURE!
> java.lang.AssertionError: AppAttempt state is not correct (timedout) 
> expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.MockAM.waitForState(MockAM.java:84)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.sendAMLaunched(MockRM.java:452)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.launchAM(MockRM.java:617)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.launchAndRegisterAM(MockRM.java:624)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.waitForNewAMToLaunchAndRegister(MockRM.java:183)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart.testShouldNotCountFailureToMaxAttemptRetry(TestAMRestart.java:392)
> {code}





[jira] [Commented] (YARN-2615) ClientToAMTokenIdentifier and DelegationTokenIdentifier should allow extended fields

2014-10-05 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14159776#comment-14159776
 ] 

Junping Du commented on YARN-2615:
--

Hi [~jianhe], the new refactored code looks good. +1.

> ClientToAMTokenIdentifier and DelegationTokenIdentifier should allow extended 
> fields
> 
>
> Key: YARN-2615
> URL: https://issues.apache.org/jira/browse/YARN-2615
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Blocker
> Attachments: YARN-2615-v2.patch, YARN-2615-v3.patch, 
> YARN-2615-v4.patch, YARN-2615-v5.patch, YARN-2615.patch
>
>
> As three TokenIdentifiers were updated in YARN-668, ClientToAMTokenIdentifier 
> and DelegationTokenIdentifier should also be updated in the same way, to allow 
> fields to be extended in the future.
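
For context, the YARN-668 approach being followed here is, roughly, to back each 
identifier with a protobuf message so optional fields can be appended later 
without breaking old readers. A hedged sketch (the proto class name is an 
assumption; details may differ from the patch):

{code}
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.security.token.TokenIdentifier;
import org.apache.hadoop.yarn.proto.YarnSecurityTokenProtos.ClientToAMTokenIdentifierProto;

public abstract class PBBackedTokenIdentifier extends TokenIdentifier {
  private ClientToAMTokenIdentifierProto proto;

  @Override
  public void write(DataOutput out) throws IOException {
    // Serialize the whole protobuf; new optional fields ride along untouched.
    out.write(proto.toByteArray());
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    // Readers built against an older .proto simply skip unknown fields.
    proto = ClientToAMTokenIdentifierProto.parseFrom((DataInputStream) in);
  }
}
{code}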





[jira] [Updated] (YARN-913) Add a way to register long-lived services in a YARN cluster

2014-10-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated YARN-913:

Attachment: YARN-913-018.patch

Patch -018
# fix {{ReservationSystemTestUtil}} where the {{RMContextImpl}} constructor 
needed an extra argument.
# TLA+ spec compiled to PDF; errors fixed

> Add a way to register long-lived services in a YARN cluster
> ---
>
> Key: YARN-913
> URL: https://issues.apache.org/jira/browse/YARN-913
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: api, resourcemanager
>Affects Versions: 2.5.0, 2.4.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: 2014-09-03_Proposed_YARN_Service_Registry.pdf, 
> 2014-09-08_YARN_Service_Registry.pdf, RegistrationServiceDetails.txt, 
> YARN-913-001.patch, YARN-913-002.patch, YARN-913-003.patch, 
> YARN-913-003.patch, YARN-913-004.patch, YARN-913-006.patch, 
> YARN-913-007.patch, YARN-913-008.patch, YARN-913-009.patch, 
> YARN-913-010.patch, YARN-913-011.patch, YARN-913-012.patch, 
> YARN-913-013.patch, YARN-913-014.patch, YARN-913-015.patch, 
> YARN-913-016.patch, YARN-913-017.patch, YARN-913-018.patch, yarnregistry.pdf, 
> yarnregistry.pdf, yarnregistry.tla
>
>
> In a YARN cluster you can't predict where services will come up -or on what 
> ports. The services need to work those things out as they come up and then 
> publish them somewhere.
> Applications need to be able to find the service instance they are to bond to 
> -and not any others in the cluster.
> Some kind of service registry -in the RM, in ZK, could do this. If the RM 
> held the write access to the ZK nodes, it would be more secure than having 
> apps register with ZK themselves.





[jira] [Updated] (YARN-913) Add a way to register long-lived services in a YARN cluster

2014-10-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated YARN-913:

Attachment: yarnregistry.pdf

PDF of the -018 patch's TLA+ specification

> Add a way to register long-lived services in a YARN cluster
> ---
>
> Key: YARN-913
> URL: https://issues.apache.org/jira/browse/YARN-913
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: api, resourcemanager
>Affects Versions: 2.5.0, 2.4.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: 2014-09-03_Proposed_YARN_Service_Registry.pdf, 
> 2014-09-08_YARN_Service_Registry.pdf, RegistrationServiceDetails.txt, 
> YARN-913-001.patch, YARN-913-002.patch, YARN-913-003.patch, 
> YARN-913-003.patch, YARN-913-004.patch, YARN-913-006.patch, 
> YARN-913-007.patch, YARN-913-008.patch, YARN-913-009.patch, 
> YARN-913-010.patch, YARN-913-011.patch, YARN-913-012.patch, 
> YARN-913-013.patch, YARN-913-014.patch, YARN-913-015.patch, 
> YARN-913-016.patch, YARN-913-017.patch, yarnregistry.pdf, yarnregistry.pdf, 
> yarnregistry.tla
>
>
> In a YARN cluster you can't predict where services will come up -or on what 
> ports. The services need to work those things out as they come up and then 
> publish them somewhere.
> Applications need to be able to find the service instance they are to bond to 
> -and not any others in the cluster.
> Some kind of service registry -in the RM, in ZK, could do this. If the RM 
> held the write access to the ZK nodes, it would be more secure than having 
> apps register with ZK themselves.





[jira] [Updated] (YARN-913) Add a way to register long-lived services in a YARN cluster

2014-10-05 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated YARN-913:

Attachment: YARN-913-017.patch

Patch -017: no changes other than a resync with trunk.

> Add a way to register long-lived services in a YARN cluster
> ---
>
> Key: YARN-913
> URL: https://issues.apache.org/jira/browse/YARN-913
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: api, resourcemanager
>Affects Versions: 2.5.0, 2.4.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: 2014-09-03_Proposed_YARN_Service_Registry.pdf, 
> 2014-09-08_YARN_Service_Registry.pdf, RegistrationServiceDetails.txt, 
> YARN-913-001.patch, YARN-913-002.patch, YARN-913-003.patch, 
> YARN-913-003.patch, YARN-913-004.patch, YARN-913-006.patch, 
> YARN-913-007.patch, YARN-913-008.patch, YARN-913-009.patch, 
> YARN-913-010.patch, YARN-913-011.patch, YARN-913-012.patch, 
> YARN-913-013.patch, YARN-913-014.patch, YARN-913-015.patch, 
> YARN-913-016.patch, YARN-913-017.patch, yarnregistry.pdf, yarnregistry.tla
>
>
> In a YARN cluster you can't predict where services will come up -or on what 
> ports. The services need to work those things out as they come up and then 
> publish them somewhere.
> Applications need to be able to find the service instance they are to bond to 
> -and not any others in the cluster.
> Some kind of service registry -in the RM, in ZK, could do this. If the RM 
> held the write access to the ZK nodes, it would be more secure than having 
> apps register with ZK themselves.





[jira] [Commented] (YARN-796) Allow for (admin) labels on nodes and resource-requests

2014-10-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14159754#comment-14159754
 ] 

Hadoop QA commented on YARN-796:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12673025/YARN-796.node-label.consolidate.10.patch
  against trunk revision 16333b4.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 40 new 
or modified test files.

  {color:red}-1 javac{color}.  The applied patch generated 1268 javac 
compiler warnings (more than the trunk's current 1267 warnings).

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 16 new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 hadoop-tools/hadoop-sls hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

  org.apache.hadoop.mapred.pipes.TestPipeApplication
  org.apache.hadoop.yarn.api.TestPBImplRecords
  
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesNodeLabels
  
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched
  
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestReservationQueue
  
org.apache.hadoop.yarn.server.resourcemanager.reservation.TestCapacityReservationSystem
  
org.apache.hadoop.yarn.server.resourcemanager.reservation.TestNoOverCommitPolicy
  
org.apache.hadoop.yarn.server.resourcemanager.reservation.TestGreedyReservationAgent
  
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestLeafQueue
  
org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.TestProportionalCapacityPreemptionPolicy
  
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestReservations
  
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation
  
org.apache.hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy

  The following test timeouts occurred in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 hadoop-tools/hadoop-sls hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:

org.apache.hadoop.mapreduce.TestLargeSort
org.apache.hadoop.yarn.client.TestResourceTrackerOnHA
org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart
org.apache.hadoop.yarn.server.resourcemanager.TestClientRMService
org.apache.hadoop.yarn.server.resourcemanager.TestRMHA

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/5268//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/5268//artifact/patchprocess/patchReleaseAuditProblems.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/5268//artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-resourcemanager.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/5268//artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-common.html
Javac warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/5268//artifact/patchprocess/diffJavacWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5268//console

This message is automatically generated.

[jira] [Commented] (YARN-2493) [YARN-796] API changes for users

2014-10-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14159747#comment-14159747
 ] 

Hadoop QA commented on YARN-2493:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12673034/YARN-2493.patch
  against trunk revision 16333b4.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

  {color:red}-1 javac{color}.  The applied patch generated 1290 javac 
compiler warnings (more than the trunk's current 1269 warnings).

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The test build failed in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/5270//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/5270//artifact/patchprocess/diffJavacWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5270//console

This message is automatically generated.

> [YARN-796] API changes for users
> 
>
> Key: YARN-2493
> URL: https://issues.apache.org/jira/browse/YARN-2493
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-2493.patch, YARN-2493.patch, YARN-2493.patch, 
> YARN-2493.patch, YARN-2493.patch
>
>
> This JIRA includes API changes for users of YARN-796, like changes in 
> {{ResourceRequest}}, {{ApplicationSubmissionContext}}, etc. This is a common 
> part of YARN-796.
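
An illustrative, pre-commit sketch of the kind of change being discussed: a 
node-label expression on a {{ResourceRequest}}. The setter name follows these 
patches and is an assumption, not a committed API:

{code}
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

public class LabelledRequest {
  // Hypothetical: ask for four 2GB/1-vcore containers, but only on
  // nodes carrying the "gpu" label.
  static ResourceRequest gpuRequest() {
    ResourceRequest request = ResourceRequest.newInstance(
        Priority.newInstance(1), ResourceRequest.ANY,
        Resource.newInstance(2048, 1), 4);
    request.setNodeLabelExpression("gpu");
    return request;
  }
}
{code}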





[jira] [Commented] (YARN-2544) [YARN-796] Common server side PB changes (not include user API PB changes)

2014-10-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14159748#comment-14159748
 ] 

Hadoop QA commented on YARN-2544:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12673036/YARN-2544.patch
  against trunk revision 16333b4.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

  {color:red}-1 javac{color}.  The applied patch generated 1290 javac 
compiler warnings (more than the trunk's current 1269 warnings).

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/5269//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-YARN-Build/5269//artifact/patchprocess/diffJavacWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5269//console

This message is automatically generated.

> [YARN-796] Common server side PB changes (not include user API PB changes)
> --
>
> Key: YARN-2544
> URL: https://issues.apache.org/jira/browse/YARN-2544
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, client, resourcemanager
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-2544.patch, YARN-2544.patch
>
>






[jira] [Updated] (YARN-2544) [YARN-796] Common server side PB changes (not include user API PB changes)

2014-10-05 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-2544:
-
Attachment: YARN-2544.patch

Attached a new patch against trunk, addressing comments from [~vinodkv].

> [YARN-796] Common server side PB changes (not include user API PB changes)
> --
>
> Key: YARN-2544
> URL: https://issues.apache.org/jira/browse/YARN-2544
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, client, resourcemanager
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-2544.patch, YARN-2544.patch
>
>






[jira] [Updated] (YARN-2494) [YARN-796] Node label manager API and storage implementations

2014-10-05 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-2494:
-
Attachment: YARN-2494.patch

Addressed comments from [~vinodkv] and [~cwelch]; attached a new patch against 
trunk.

> [YARN-796] Node label manager API and storage implementations
> -
>
> Key: YARN-2494
> URL: https://issues.apache.org/jira/browse/YARN-2494
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-2494.patch, YARN-2494.patch, YARN-2494.patch, 
> YARN-2494.patch, YARN-2494.patch, YARN-2494.patch, YARN-2494.patch
>
>
> This JIRA includes the APIs and storage implementations of the node label 
> manager. NodeLabelManager is an abstract class used to manage labels of nodes 
> in the cluster. It has APIs to query/modify
> - Nodes according to a given label
> - Labels according to a given hostname
> - Add/remove labels
> - Set labels of nodes in the cluster
> - Persist/recover changes of labels/labels-on-nodes to/from storage
> And it has two implementations to store modifications (see the sketch below):
> - Memory-based storage: it will not persist changes, so all labels will be 
> lost when the RM restarts
> - FileSystem-based storage: it will persist/recover to/from a FileSystem (like 
> HDFS), and all labels and labels-on-nodes will be recovered upon RM restart
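
A skeletal sketch of that surface (hypothetical signatures; the patch's actual 
API may differ):

{code}
import java.io.IOException;
import java.util.Map;
import java.util.Set;

// Hypothetical skeleton of the abstract manager described above; the two
// implementations differ only in how (or whether) they persist mutations.
public abstract class NodeLabelManager {
  public abstract Set<String> getNodesWithLabel(String label);
  public abstract Set<String> getLabelsOnNode(String hostname);
  public abstract void addLabels(Set<String> labels) throws IOException;
  public abstract void removeLabels(Set<String> labels) throws IOException;
  public abstract void setLabelsOnNodes(Map<String, Set<String>> nodeToLabels)
      throws IOException;
  // Memory-based storage: recover() is a no-op, labels vanish on RM restart.
  // FileSystem-based storage: mutations go to an edit log, replayed here.
  public abstract void recover() throws IOException;
}
{code}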





[jira] [Updated] (YARN-2493) [YARN-796] API changes for users

2014-10-05 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-2493:
-
Attachment: (was: YARN-2493.patch)

> [YARN-796] API changes for users
> 
>
> Key: YARN-2493
> URL: https://issues.apache.org/jira/browse/YARN-2493
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-2493.patch, YARN-2493.patch, YARN-2493.patch, 
> YARN-2493.patch, YARN-2493.patch
>
>
> This JIRA includes API changes for users of YARN-796, like changes in 
> {{ResourceRequest}}, {{ApplicationSubmissionContext}}, etc. This is a common 
> part of YARN-796.





[jira] [Updated] (YARN-2493) [YARN-796] API changes for users

2014-10-05 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-2493:
-
Attachment: YARN-2493.patch

> [YARN-796] API changes for users
> 
>
> Key: YARN-2493
> URL: https://issues.apache.org/jira/browse/YARN-2493
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-2493.patch, YARN-2493.patch, YARN-2493.patch, 
> YARN-2493.patch, YARN-2493.patch
>
>
> This JIRA includes API changes for users of YARN-796, like changes in 
> {{ResourceRequest}}, {{ApplicationSubmissionContext}}, etc. This is a common 
> part of YARN-796.





[jira] [Updated] (YARN-2493) [YARN-796] API changes for users

2014-10-05 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-2493:
-
Attachment: YARN-2493.patch

Attached new patch against trunk

> [YARN-796] API changes for users
> 
>
> Key: YARN-2493
> URL: https://issues.apache.org/jira/browse/YARN-2493
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-2493.patch, YARN-2493.patch, YARN-2493.patch, 
> YARN-2493.patch, YARN-2493.patch
>
>
> This JIRA includes API changes for users of YARN-796, like changes in 
> {{ResourceRequest}}, {{ApplicationSubmissionContext}}, etc. This is a common 
> part of YARN-796.





[jira] [Commented] (YARN-1879) Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol

2014-10-05 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14159730#comment-14159730
 ] 

Xuan Gong commented on YARN-1879:
-

+1 on Jian's proposal. We need to handle this scenario in failover/restart, but 
the retry-cache method cannot fully solve this problem. Handling it on the RM 
side sounds like a better idea; we have done similar things before, such as for 
submitApplication.

> Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol
> ---
>
> Key: YARN-1879
> URL: https://issues.apache.org/jira/browse/YARN-1879
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Jian He
>Assignee: Tsuyoshi OZAWA
>Priority: Critical
> Attachments: YARN-1879.1.patch, YARN-1879.1.patch, 
> YARN-1879.11.patch, YARN-1879.12.patch, YARN-1879.13.patch, 
> YARN-1879.14.patch, YARN-1879.15.patch, YARN-1879.16.patch, 
> YARN-1879.17.patch, YARN-1879.18.patch, YARN-1879.19.patch, 
> YARN-1879.2-wip.patch, YARN-1879.2.patch, YARN-1879.20.patch, 
> YARN-1879.21.patch, YARN-1879.3.patch, YARN-1879.4.patch, YARN-1879.5.patch, 
> YARN-1879.6.patch, YARN-1879.7.patch, YARN-1879.8.patch, YARN-1879.9.patch
>
>






[jira] [Commented] (YARN-1879) Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol

2014-10-05 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14159724#comment-14159724
 ] 

Jian He commented on YARN-1879:
---

Sorry for the late response. I rethought this problem: adding a retry-cache 
here is to avoid the ApplicationAlreadyRegistered exception. Instead of adding 
more implementation to make it AtMostOnce, can we change the protocol to simply 
accept duplicate register requests and mark the protocol as Idempotent? On 
failover with work-preserving restart we re-register anyway. Similarly, it 
should also be fine to just mark finishApplicationMaster as idempotent. 
Thoughts?
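
Concretely, the proposal amounts to something like the following sketch (the 
interface name here is a placeholder; the real protocol is 
{{ApplicationMasterProtocol}}):

{code}
import java.io.IOException;

import org.apache.hadoop.io.retry.Idempotent;
import org.apache.hadoop.yarn.api.protocolrecords.FinishApplicationMasterRequest;
import org.apache.hadoop.yarn.api.protocolrecords.FinishApplicationMasterResponse;
import org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterRequest;
import org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterResponse;
import org.apache.hadoop.yarn.exceptions.YarnException;

public interface AMProtocolSketch {
  // Retried registration no longer fails with ApplicationAlreadyRegistered:
  // the RM accepts the duplicate and returns an equivalent response.
  @Idempotent
  RegisterApplicationMasterResponse registerApplicationMaster(
      RegisterApplicationMasterRequest request)
      throws YarnException, IOException;

  // Likewise safe to retry: unregistering twice has the same effect as once.
  @Idempotent
  FinishApplicationMasterResponse finishApplicationMaster(
      FinishApplicationMasterRequest request)
      throws YarnException, IOException;
}
{code}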

> Mark Idempotent/AtMostOnce annotations to ApplicationMasterProtocol
> ---
>
> Key: YARN-1879
> URL: https://issues.apache.org/jira/browse/YARN-1879
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Jian He
>Assignee: Tsuyoshi OZAWA
>Priority: Critical
> Attachments: YARN-1879.1.patch, YARN-1879.1.patch, 
> YARN-1879.11.patch, YARN-1879.12.patch, YARN-1879.13.patch, 
> YARN-1879.14.patch, YARN-1879.15.patch, YARN-1879.16.patch, 
> YARN-1879.17.patch, YARN-1879.18.patch, YARN-1879.19.patch, 
> YARN-1879.2-wip.patch, YARN-1879.2.patch, YARN-1879.20.patch, 
> YARN-1879.21.patch, YARN-1879.3.patch, YARN-1879.4.patch, YARN-1879.5.patch, 
> YARN-1879.6.patch, YARN-1879.7.patch, YARN-1879.8.patch, YARN-1879.9.patch
>
>






[jira] [Updated] (YARN-796) Allow for (admin) labels on nodes and resource-requests

2014-10-05 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-796:

Attachment: YARN-796.node-label.consolidate.10.patch

Uploaded ver.10 patch

> Allow for (admin) labels on nodes and resource-requests
> ---
>
> Key: YARN-796
> URL: https://issues.apache.org/jira/browse/YARN-796
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.4.1
>Reporter: Arun C Murthy
>Assignee: Wangda Tan
> Attachments: LabelBasedScheduling.pdf, 
> Node-labels-Requirements-Design-doc-V1.pdf, 
> Node-labels-Requirements-Design-doc-V2.pdf, YARN-796-Diagram.pdf, 
> YARN-796.node-label.consolidate.1.patch, 
> YARN-796.node-label.consolidate.10.patch, 
> YARN-796.node-label.consolidate.2.patch, 
> YARN-796.node-label.consolidate.3.patch, 
> YARN-796.node-label.consolidate.4.patch, 
> YARN-796.node-label.consolidate.5.patch, 
> YARN-796.node-label.consolidate.6.patch, 
> YARN-796.node-label.consolidate.7.patch, 
> YARN-796.node-label.consolidate.8.patch, YARN-796.node-label.demo.patch.1, 
> YARN-796.patch, YARN-796.patch4
>
>
> It will be useful for admins to specify labels for nodes. Examples of labels 
> are OS, processor architecture etc.
> We should expose these labels and allow applications to specify labels on 
> resource-requests.
> Obviously we need to support admin operations on adding/removing node labels.





[jira] [Resolved] (YARN-2634) Test failure for TestClientRMTokens

2014-10-05 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He resolved YARN-2634.
---
Resolution: Invalid

> Test failure for TestClientRMTokens
> ---
>
> Key: YARN-2634
> URL: https://issues.apache.org/jira/browse/YARN-2634
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Junping Du
>Assignee: Jian He
>Priority: Blocker
>
> The test get failed as below:
> {noformat}
> ---
> Test set: org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens
> ---
> Tests run: 6, Failures: 3, Errors: 2, Skipped: 0, Time elapsed: 60.184 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens
> testShortCircuitRenewCancelDifferentHostSamePort(org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens)
>   Time elapsed: 22.693 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens.checkShortCircuitRenewCancel(TestClientRMTokens.java:319)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens.testShortCircuitRenewCancelDifferentHostSamePort(TestClientRMTokens.java:272)
> testShortCircuitRenewCancelDifferentHostDifferentPort(org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens)
>   Time elapsed: 20.087 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens.checkShortCircuitRenewCancel(TestClientRMTokens.java:319)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens.testShortCircuitRenewCancelDifferentHostDifferentPort(TestClientRMTokens.java:283)
> testShortCircuitRenewCancel(org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens)
>   Time elapsed: 0.031 sec  <<< ERROR!
> java.lang.NullPointerException: null
> at 
> org.apache.hadoop.yarn.security.client.RMDelegationTokenIdentifier$Renewer.getRmClient(RMDelegationTokenIdentifier.java:148)
> at 
> org.apache.hadoop.yarn.security.client.RMDelegationTokenIdentifier$Renewer.renew(RMDelegationTokenIdentifier.java:101)
> at org.apache.hadoop.security.token.Token.renew(Token.java:377)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens.checkShortCircuitRenewCancel(TestClientRMTokens.java:309)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens.testShortCircuitRenewCancel(TestClientRMTokens.java:241)
> testShortCircuitRenewCancelSameHostDifferentPort(org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens)
>   Time elapsed: 0.061 sec  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at org.junit.Assert.assertEquals(Assert.java:144)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens.checkShortCircuitRenewCancel(TestClientRMTokens.java:319)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens.testShortCircuitRenewCancelSameHostDifferentPort(TestClientRMTokens.java:261)
> testShortCircuitRenewCancelWildcardAddress(org.apache.hadoop.yarn.server.resourcemanager.TestClientRMTokens)
>   Time elapsed: 0.07 sec  <<< ERROR!
> java.lang.NullPointerException: null
> at org.apache.hadoop.net.NetUtils.isLocalAddress(NetUtils.java:684)
> at 
> org.apache.hadoop.yarn.security.client.RMDelegationTokenIdentifier$Renewer.getRmClient(RMDelegationTokenIdentifier.java:149)
> {noformat}





[jira] [Commented] (YARN-2615) ClientToAMTokenIdentifier and DelegationTokenIdentifier should allow extended fields

2014-10-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14159649#comment-14159649
 ] 

Hadoop QA commented on YARN-2615:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12673008/YARN-2615-v5.patch
  against trunk revision 16333b4.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/5267//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5267//console

This message is automatically generated.

> ClientToAMTokenIdentifier and DelegationTokenIdentifier should allow extended 
> fields
> 
>
> Key: YARN-2615
> URL: https://issues.apache.org/jira/browse/YARN-2615
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Blocker
> Attachments: YARN-2615-v2.patch, YARN-2615-v3.patch, 
> YARN-2615-v4.patch, YARN-2615-v5.patch, YARN-2615.patch
>
>
> As three TokenIdentifiers were updated in YARN-668, ClientToAMTokenIdentifier 
> and DelegationTokenIdentifier should also be updated in the same way, to allow 
> fields to be extended in the future.





[jira] [Updated] (YARN-2615) ClientToAMTokenIdentifier and DelegationTokenIdentifier should allow extended fields

2014-10-05 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-2615:
--
Attachment: YARN-2615-v5.patch

The patch looks good overall. I refactored the code a bit myself on 
YarnDelegationTokenIdentifier and the readers of 
NMTokenIdentifier/AMRMTokenIdentifier/ContainerTokenIdentifier. [~djp], could 
you take a look?

> ClientToAMTokenIdentifier and DelegationTokenIdentifier should allow extended 
> fields
> 
>
> Key: YARN-2615
> URL: https://issues.apache.org/jira/browse/YARN-2615
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Blocker
> Attachments: YARN-2615-v2.patch, YARN-2615-v3.patch, 
> YARN-2615-v4.patch, YARN-2615-v5.patch, YARN-2615.patch
>
>
> As three TokenIdentifiers were updated in YARN-668, ClientToAMTokenIdentifier 
> and DelegationTokenIdentifier should also be updated in the same way, to allow 
> fields to be extended in the future.





[jira] [Updated] (YARN-2645) TestAMRestart#testShouldNotCountFailureToMaxAttemptRetry fails in trunk

2014-10-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated YARN-2645:
-
Description: 
From https://builds.apache.org/job/Hadoop-Yarn-trunk/702/ :
{code}
testShouldNotCountFailureToMaxAttemptRetry(org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart)
  Time elapsed: 46.182 sec  <<< FAILURE!
java.lang.AssertionError: AppAttempt state is not correct (timedout) 
expected: but was:
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at 
org.apache.hadoop.yarn.server.resourcemanager.MockAM.waitForState(MockAM.java:84)
at 
org.apache.hadoop.yarn.server.resourcemanager.MockRM.sendAMLaunched(MockRM.java:452)
at 
org.apache.hadoop.yarn.server.resourcemanager.MockRM.launchAM(MockRM.java:617)
at 
org.apache.hadoop.yarn.server.resourcemanager.MockRM.launchAndRegisterAM(MockRM.java:624)
at 
org.apache.hadoop.yarn.server.resourcemanager.MockRM.waitForNewAMToLaunchAndRegister(MockRM.java:183)
at 
org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart.testShouldNotCountFailureToMaxAttemptRetry(TestAMRestart.java:392)
{code}
   Priority: Minor  (was: Major)

> TestAMRestart#testShouldNotCountFailureToMaxAttemptRetry fails in trunk
> ---
>
> Key: YARN-2645
> URL: https://issues.apache.org/jira/browse/YARN-2645
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Ted Yu
>Priority: Minor
>
> From https://builds.apache.org/job/Hadoop-Yarn-trunk/702/ :
> {code}
> testShouldNotCountFailureToMaxAttemptRetry(org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart)
>   Time elapsed: 46.182 sec  <<< FAILURE!
> java.lang.AssertionError: AppAttempt state is not correct (timedout) 
> expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.MockAM.waitForState(MockAM.java:84)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.sendAMLaunched(MockRM.java:452)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.launchAM(MockRM.java:617)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.launchAndRegisterAM(MockRM.java:624)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.MockRM.waitForNewAMToLaunchAndRegister(MockRM.java:183)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart.testShouldNotCountFailureToMaxAttemptRetry(TestAMRestart.java:392)
> {code}





[jira] [Updated] (YARN-2645) TestAMRestart#testShouldNotCountFailureToMaxAttemptRetry fails in trunk

2014-10-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated YARN-2645:
-
Summary: TestAMRestart#testShouldNotCountFailureToMaxAttemptRetry fails in 
trunk  (was: TestAMRestarttestShouldNotCountFailureToMaxAttemptRetry)

> TestAMRestart#testShouldNotCountFailureToMaxAttemptRetry fails in trunk
> ---
>
> Key: YARN-2645
> URL: https://issues.apache.org/jira/browse/YARN-2645
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Ted Yu
>






[jira] [Created] (YARN-2645) TestAMRestarttestShouldNotCountFailureToMaxAttemptRetry

2014-10-05 Thread Ted Yu (JIRA)
Ted Yu created YARN-2645:


 Summary: TestAMRestarttestShouldNotCountFailureToMaxAttemptRetry
 Key: YARN-2645
 URL: https://issues.apache.org/jira/browse/YARN-2645
 Project: Hadoop YARN
  Issue Type: Test
Reporter: Ted Yu





