[jira] [Created] (YARN-111) Application level priority in Resource Manager Schedulers

2012-09-19 nemon lou (JIRA)
nemon lou created YARN-111:
--

 Summary: Application level priority in Resource Manager Schedulers
 Key: YARN-111
 URL: https://issues.apache.org/jira/browse/YARN-111
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.0.1-alpha
Reporter: nemon lou


We need application level priority for Hadoop 2.0, both in the FIFO scheduler 
and the Capacity Scheduler.
In Hadoop 1.0.x, job priority is supported.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-111) Application level priority in Resource Manager Schedulers

2012-09-19 Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13458680#comment-13458680
 ] 

Robert Joseph Evans commented on YARN-111:
--

I thought that job priority was added into the 1.0 line a while ago, and then 
it was more or less removed.  You could still specify a priority, but the 
priority itself was ignored.  I am in favor of adding in priority again, but I 
just want to be sure that we address the reasons why it was removed/disabled in 
1.0.  Assuming that I am remembering things correctly. 

 Application level priority in Resource Manager Schedulers
 -

 Key: YARN-111
 URL: https://issues.apache.org/jira/browse/YARN-111
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.0.1-alpha
Reporter: nemon lou

 We need application level priority for Hadoop 2.0, both in the FIFO scheduler 
 and the Capacity Scheduler.
 In Hadoop 1.0.x, job priority is supported.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-106) Nodemanager needs to set permissions of local directories

2012-09-19 Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13458761#comment-13458761
 ] 

Jason Lowe commented on YARN-106:
-

How will public files in the distributed cache work properly if the local 
directory doesn't have world access?  We're already locking down access to the 
directories within these top-level directories for files that aren't public, so 
even if they are left at 755 I'm not sure there is a real need to set them 
otherwise.  Maybe there's a valid use-case I'm missing?

I agree; I'd rather avoid adding yet more configs, especially since it would be 
easy to configure a broken setup (e.g.: files put in the public cache that are 
inaccessible to many jobs, or logs that can't be served/aggregated by the 
nodemanager).
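
For context, a minimal sketch of the kind of ancestor check public 
localization depends on (a hypothetical helper in plain java.nio.file, not 
the actual nodemanager code): every directory above a public cache file must 
grant world execute, which is why locking the top-level local dirs down past 
755 would break public files.

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermission;
import java.util.Set;

public class PublicCacheCheck {
  /** True only if every ancestor of 'path' is world-executable; public
   *  distributed-cache files need this so containers of any user can
   *  traverse down to them. */
  static boolean ancestorsWorldExecutable(Path path) throws IOException {
    for (Path dir = path.getParent(); dir != null; dir = dir.getParent()) {
      Set<PosixFilePermission> perms = Files.getPosixFilePermissions(dir);
      if (!perms.contains(PosixFilePermission.OTHERS_EXECUTE)) {
        return false;  // e.g. a 0077-umask 700 top-level dir blocks everyone
      }
    }
    return true;
  }

  public static void main(String[] args) throws IOException {
    System.out.println(
        ancestorsWorldExecutable(Paths.get("/tmp/nm-local-dir/filecache/10")));
  }
}
{code}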


 Nodemanager needs to set permissions of local directories
 -

 Key: YARN-106
 URL: https://issues.apache.org/jira/browse/YARN-106
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 0.23.3, 2.0.0-alpha
Reporter: Jason Lowe
Assignee: Jason Lowe
 Attachments: YARN-106.patch


 If the nodemanager process is running with a restrictive default umask (e.g.: 
 0077) then it will create its local directories with permissions that are too 
 restrictive to allow containers from other users to run.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (YARN-112) Race in localization can cause containers to fail

2012-09-19 Jason Lowe (JIRA)
Jason Lowe created YARN-112:
---

 Summary: Race in localization can cause containers to fail
 Key: YARN-112
 URL: https://issues.apache.org/jira/browse/YARN-112
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 0.23.3
Reporter: Jason Lowe


On one of our 0.23 clusters, I saw a case of two containers, corresponding to 
two map tasks of a MR job, that were launched almost simultaneously on the same 
node.  It appears they both tried to localize job.jar and job.xml at the same 
time.  One of the containers failed when it couldn't rename the temporary 
job.jar directory to its final name because the target directory wasn't empty.  
Shortly afterwards the second container failed because job.xml could not be 
found, presumably because the first container removed it when it cleaned up.
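
To make the failure mode concrete, here is a toy reproduction of the rename 
race in plain java.nio.file (an illustration only, not the FSDownload code; 
the filecache number is taken from the log in the following comment):

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

/** Two "containers" stage the same resource in private temp dirs, then
 *  publish it by renaming to one shared destination; only one rename wins. */
public class LocalizeRace {
  public static void main(String[] args) throws IOException {
    Path base = Files.createTempDirectory("filecache");
    Path dest = base.resolve("3101732981627262626");  // shared final name
    for (int i = 0; i < 2; i++) {
      Path tmp = base.resolve("tmp" + i);
      new Thread(() -> {
        try {
          Files.createDirectories(tmp);
          Files.createFile(tmp.resolve("job.jar"));
          // The loser fails here: the winner already created dest.
          Files.move(tmp, dest);
          System.out.println("localized into " + dest);
        } catch (IOException e) {
          System.err.println("localization failed: " + e);
        }
      }).start();
    }
  }
}
{code}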

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-112) Race in localization can cause containers to fail

2012-09-19 Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13458799#comment-13458799
 ] 

Jason Lowe commented on YARN-112:
-

Here's the localization error that appeared in the nodemanager log when the 
first container failed:

{noformat}
 [Node Status Updater]2012-09-18 14:39:04,476 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource:
 Resource hdfs://xxx:xxx/user/somebody/.staging/job_1347923101942_0602/job.xml 
transitioned from DOWNLOADING to LOCALIZED
 [IPC Server handler 4 on 8040]2012-09-18 14:39:04,484 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
 DEBUG: FAILED { 
hdfs://xxx:xxx/user/somebody/.staging/job_1347923101942_0602/job.jar, 
1347979129443, ARCHIVE }
 [IPC Server handler 3 on 8040]RemoteTrace: 
java.io.IOException: Rename cannot overwrite non empty destination directory 
/xxx/usercache/somebody/appcache/application_1347923101942_0602/filecache/3101732981627262626
at 
org.apache.hadoop.fs.AbstractFileSystem.renameInternal(AbstractFileSystem.java:706)
at org.apache.hadoop.fs.FilterFs.renameInternal(FilterFs.java:221)
at 
org.apache.hadoop.fs.AbstractFileSystem.rename(AbstractFileSystem.java:649)
at org.apache.hadoop.fs.FileContext.rename(FileContext.java:889)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:162)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:49)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
at LocalTrace: 
org.apache.hadoop.yarn.exceptions.impl.pb.YarnRemoteExceptionPBImpl: 
Rename cannot overwrite non empty destination directory 
/xxx/usercache/somebody/appcache/application_1347923101942_0602/filecache/3101732981627262626
at 
org.apache.hadoop.yarn.server.nodemanager.api.protocolrecords.impl.pb.LocalResourceStatusPBImpl.convertFromProtoFormat(LocalResourceStatusPBImpl.java:217)
at 
org.apache.hadoop.yarn.server.nodemanager.api.protocolrecords.impl.pb.LocalResourceStatusPBImpl.getException(LocalResourceStatusPBImpl.java:147)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.update(ResourceLocalizationService.java:823)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerTracker.processHeartbeat(ResourceLocalizationService.java:493)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.heartbeat(ResourceLocalizationService.java:222)
at 
org.apache.hadoop.yarn.server.nodemanager.api.impl.pb.service.LocalizationProtocolPBServiceImpl.heartbeat(LocalizationProtocolPBServiceImpl.java:46)
at 
org.apache.hadoop.yarn.proto.LocalizationProtocol$LocalizationProtocolService$2.callBlockingMethod(LocalizationProtocol.java:57)
at 
org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Server.call(ProtoOverHadoopRpcEngine.java:353)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1528)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1524)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1212)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1522)
2012-09-18 14:39:04,494 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: 
Container container_1347923101942_0602_01_16 transitioned from LOCALIZING 
to LOCALIZATION_FAILED
{noformat}

 Race in localization can cause containers to fail
 -

 Key: YARN-112
 URL: https://issues.apache.org/jira/browse/YARN-112
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 0.23.3
Reporter: Jason Lowe

 On one of our 0.23 clusters, I saw a case of two containers, corresponding to 
 two map tasks of a MR job, that were launched almost simultaneously on the 
 same node.  It appears they both tried to localize job.jar and job.xml at the 
 same time.  One of the containers failed when it couldn't rename the temporary 
 job.jar directory to its final name because the target directory wasn't empty. 
 Shortly afterwards the second container failed because job.xml could not be 
 found, presumably because the first container removed it when it cleaned up.

[jira] [Updated] (YARN-88) DefaultContainerExecutor can fail to set proper permissions

2012-09-19 Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-88?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-88:
---

Attachment: YARN-88.patch

Updated the patch to have the container and temp directory use the same 
permissions as the appId directory.  This is what the LinuxContainerExecutor 
does.
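
For reference, the general shape of such a fix (a sketch against the 
FileContext API, not the attached patch): mkdir filters the requested mode 
through fs.permissions.umask-mode, so the permissions have to be re-applied 
explicitly after the directory is created.

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class CreateDirWithPerms {
  /** A 0077 umask quietly turns a requested 710 into 700 at mkdir time;
   *  re-applying the permission afterwards restores what was asked for. */
  static void createDir(FileContext lfs, Path dir, FsPermission perm)
      throws IOException {
    lfs.mkdir(dir, perm, true);
    if (!perm.equals(lfs.getFileStatus(dir).getPermission())) {
      lfs.setPermission(dir, perm);
    }
  }

  public static void main(String[] args) throws IOException {
    FileContext lfs = FileContext.getLocalFSFileContext();
    createDir(lfs, new Path("/tmp/usercache/somebody/appcache/app_01"),
        new FsPermission((short) 0710));
  }
}
{code}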

 DefaultContainerExecutor can fail to set proper permissions
 ---

 Key: YARN-88
 URL: https://issues.apache.org/jira/browse/YARN-88
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 0.23.3, 2.0.0-alpha
Reporter: Jason Lowe
Assignee: Jason Lowe
 Attachments: YARN-88.patch, YARN-88.patch


 {{DefaultContainerExecutor}} can fail to set the proper permissions on its 
 local directories if the cluster has been configured with a restrictive 
 umask, e.g.: fs.permissions.umask-mode=0077.  The configured umask ends up 
 defeating the permissions requested by {{DefaultContainerExecutor}} when it 
 creates directories.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-111) Application level priority in Resource Manager Schedulers

2012-09-19 Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13459016#comment-13459016
 ] 

Harsh J commented on YARN-111:
--

Robert,

I still see job priority existing in MR1 (1.x). Which JIRA removed this, per 
your comment above? Or is this something CapacityScheduler-specific we're 
discussing?

In YARN I see Priority coming in for generally all resource requests (which I 
assume applies to the AM too), and hence 
http://hadoop.apache.org/docs/current/api/org/apache/hadoop/yarn/api/records/ApplicationSubmissionContext.html#setPriority(org.apache.hadoop.yarn.api.records.Priority)
 ought to work, since the CS's LeafQueue does look at it?
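
For reference, the submission-side call being pointed at looks roughly like 
this (a minimal sketch using the 2.0.x record APIs; whether the FIFO and 
Capacity schedulers actually act on the value is exactly what this JIRA is 
about):

{code}
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.util.Records;

public class SubmitWithPriority {
  /** Attach a priority to an application at submission time. */
  static void setAppPriority(ApplicationSubmissionContext ctx, int value) {
    Priority p = Records.newRecord(Priority.class);
    p.setPriority(value);
    ctx.setPriority(p);
  }
}
{code}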

 Application level priority in Resource Manager Schedulers
 -

 Key: YARN-111
 URL: https://issues.apache.org/jira/browse/YARN-111
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 2.0.1-alpha
Reporter: nemon lou

 We need application level priority for Hadoop 2.0, both in the FIFO scheduler 
 and the Capacity Scheduler.
 In Hadoop 1.0.x, job priority is supported.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-88) DefaultContainerExecutor can fail to set proper permissions

2012-09-19 Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-88?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13459073#comment-13459073
 ] 

Siddharth Seth commented on YARN-88:


bq. It does make me wonder why we are explicitly granting group directory 
execute access to the appId directory. What does the nodemanager user need to 
access in there? Should we instead be locking down the 
usercache/${user}/appcache/${appId} to 700? If so, then we're OK using default 
permissions on the container and temp directories since the parent directory is 
locked down. If it is necessary for the appId directory to be 710, then it 
seems like the containerId should also be 710 and the temp directory should be 
700.

Good point, 700 does seem to be adequate. Can't think of where the NM may need 
group access. Needs some looking into.
For now, will look at and commit the new patch.

 DefaultContainerExecutor can fail to set proper permissions
 ---

 Key: YARN-88
 URL: https://issues.apache.org/jira/browse/YARN-88
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 0.23.3, 2.0.0-alpha
Reporter: Jason Lowe
Assignee: Jason Lowe
 Attachments: YARN-88.patch, YARN-88.patch


 {{DefaultContainerExecutor}} can fail to set the proper permissions on its 
 local directories if the cluster has been configured with a restrictive 
 umask, e.g.: fs.permissions.umask-mode=0077.  The configured umask ends up 
 defeating the permissions requested by {{DefaultContainerExecutor}} when it 
 creates directories.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-82) YARN local-dirs defaults to /tmp/nm-local-dir

2012-09-19 Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-82?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13459081#comment-13459081
 ] 

Siddharth Seth commented on YARN-82:


log-dirs could be ${yarn.log.dir}/app-logs, or even ${yarn.log.dir}/userlogs 
like in 1.0, since the property is used to configure the log location for 
containers run on behalf of a user.
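
As a concrete example, the defaults under discussion would look something 
like this in yarn-site.xml (the property names are real; the values are the 
suggestions from this thread, not committed defaults):

{code}
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>${hadoop.tmp.dir}/nm-local-dir</value>
</property>
<property>
  <name>yarn.nodemanager.log-dirs</name>
  <value>${yarn.log.dir}/userlogs</value>
</property>
{code}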

 YARN local-dirs defaults to /tmp/nm-local-dir
 -

 Key: YARN-82
 URL: https://issues.apache.org/jira/browse/YARN-82
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.0.0-alpha
Reporter: Andy Isaacson
Assignee: Hemanth Yamijala
Priority: Minor
 Attachments: YARN-82.patch


 {{yarn.nodemanager.local-dirs}} defaults to {{/tmp/nm-local-dir}}.  It should 
 be {{hadoop.tmp.dir}}/nm-local-dir or similar.  Among other problems, this can 
 prevent multiple test clusters from starting on the same machine.
 Thanks to Hemanth Yamijala for reporting this issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-53) Add protocol to YARN to support GetGroups

2012-09-19 Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-53?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13459104#comment-13459104
 ] 

Hadoop QA commented on YARN-53:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12545635/YARN-53-v4.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.  Please justify why no new tests are needed for this patch.  Also 
please list what manual steps were performed to verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-YARN-Build/44//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/44//console

This message is automatically generated.

 Add protocol to YARN to support GetGroups
 -

 Key: YARN-53
 URL: https://issues.apache.org/jira/browse/YARN-53
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Alejandro Abdelnur
Assignee: Bo Wang
  Labels: patch
 Fix For: 2.0.0-alpha

 Attachments: MAPREDUCE-4268.patch, YARN-53.patch, YARN-53-v2.patch, 
 YARN-53-v3.patch, YARN-53-v4.patch


 This is a regression from Hadoop1, as hadoop mrgroups fails with:
 {code}
 Exception in thread "main" java.lang.UnsupportedOperationException
   at 
 org.apache.hadoop.mapred.tools.GetGroups.getProtocolAddress(GetGroups.java:50)
   at 
 org.apache.hadoop.tools.GetGroupsBase.getUgmProtocol(GetGroupsBase.java:98)
   at org.apache.hadoop.tools.GetGroupsBase.run(GetGroupsBase.java:71)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
   at org.apache.hadoop.mapred.tools.GetGroups.main(GetGroups.java:54)
 {code}
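
For context, the exception above is thrown by the stub getProtocolAddress in 
the MR1-era tool; any YARN-backed replacement has to supply a real address, 
roughly like this (a sketch only: the constructor and the RM admin-address 
lookup are assumptions, not the attached patch):

{code}
import java.io.IOException;
import java.net.InetSocketAddress;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.tools.GetGroupsBase;

public class YarnGetGroups extends GetGroupsBase {
  protected YarnGetGroups(Configuration conf) {
    super(conf);
  }

  @Override
  protected InetSocketAddress getProtocolAddress(Configuration conf)
      throws IOException {
    // Instead of throwing UnsupportedOperationException, resolve the
    // ResourceManager admin endpoint (property name assumed for illustration).
    return conf.getSocketAddr("yarn.resourcemanager.admin.address",
        "0.0.0.0:8033", 8033);
  }
}
{code}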

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-53) Add protocol to YARN to support GetGroups

2012-09-19 Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-53?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13459165#comment-13459165
 ] 

Alejandro Abdelnur commented on YARN-53:


+1. Seems much simpler this way (without having to do HADOOP-8805). I'll 
crosspost in HADOOP-8805 and wait for Suresh/Todd comments before committing.

 Add protocol to YARN to support GetGroups
 -

 Key: YARN-53
 URL: https://issues.apache.org/jira/browse/YARN-53
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Alejandro Abdelnur
Assignee: Bo Wang
  Labels: patch
 Fix For: 2.0.0-alpha

 Attachments: MAPREDUCE-4268.patch, YARN-53.patch, YARN-53-v2.patch, 
 YARN-53-v3.patch, YARN-53-v4.patch


 This is a regression from Hadoop1, as hadoop mrgroups fails with:
 {code}
 Exception in thread "main" java.lang.UnsupportedOperationException
   at 
 org.apache.hadoop.mapred.tools.GetGroups.getProtocolAddress(GetGroups.java:50)
   at 
 org.apache.hadoop.tools.GetGroupsBase.getUgmProtocol(GetGroupsBase.java:98)
   at org.apache.hadoop.tools.GetGroupsBase.run(GetGroupsBase.java:71)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
   at org.apache.hadoop.mapred.tools.GetGroups.main(GetGroups.java:54)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-33) LocalDirsHandler should validate the configured local and log dirs

2012-09-19 Mayank Bansal (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-33?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13459187#comment-13459187
 ] 

Mayank Bansal commented on YARN-33:
---

Hi Sid,

Thanks for the review.

bq. I think it's better to fail early, rather than silently running a system 
with fewer disks than intended. In this case, we should generate errors 
instead of ignoring the badly configured dirs, or count the bad dirs towards 
unhealthy disks. Thoughts?

I think we already do that right now. We have functions which remove the 
unhealthy log dirs and local dirs from the list, and I followed the same 
model. I think it's reasonable for the log/local dirs because people may 
have added multiple dirs, and if one of them is good we should be able to 
keep the cluster functioning, with an error message in the log saying which 
disks are in bad shape.
Thoughts?

bq. The unit test doesn't really belong to TestDirectoryCollection. It can be 
in a separate class - something like TestLocalDirsHandlerService. Also I 
don't think the test needs to attempt deleting any files. Just verifying the 
number of dirs / expected dirs should be sufficient.

I will create the new class; however, the deleting was an attempt to 
demonstrate that after this change there are no unwanted directories.

bq. Additional unit tests, which verify alternate schemes are handled as they 
should be.

Agreed, will add the unit tests.

Thanks,
Mayank


 LocalDirsHandler should validate the configured local and log dirs
 --

 Key: YARN-33
 URL: https://issues.apache.org/jira/browse/YARN-33
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.0.2-alpha
Reporter: Mayank Bansal
Assignee: Mayank Bansal
 Attachments: YARN-33-trunk-v1.patch, YARN-33-trunk-v2.patch


 When yarn.nodemanager.log-dirs is set to a file:// URI, node manager startup 
 creates a directory literally named file:// under the CWD, which should not 
 be there.
 Thanks,
 Mayank
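
As an illustration of the validation under discussion (a hypothetical 
helper, not the patch): a configured dir should be accepted as a plain path 
or a file:// URI, and anything else rejected rather than silently created as 
a literal "file:" directory under the CWD.

{code}
import java.net.URI;
import java.net.URISyntaxException;

public class LocalDirValidator {
  /** Returns the local path for a plain path or file:// URI; rejects
   *  other schemes instead of treating the URI as a relative dir name. */
  static String validate(String dir) {
    try {
      URI uri = new URI(dir);
      String scheme = uri.getScheme();
      if (scheme == null || "file".equalsIgnoreCase(scheme)) {
        return uri.getPath();  // e.g. /grid/yarn/logs
      }
    } catch (URISyntaxException e) {
      // fall through to the rejection below
    }
    throw new IllegalArgumentException("Invalid local/log dir: " + dir);
  }

  public static void main(String[] args) {
    System.out.println(validate("file:///grid/yarn/logs"));
    System.out.println(validate("/grid/yarn/logs"));
  }
}
{code}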

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (YARN-53) Add protocol to YARN to support GetGroups

2012-09-19 Bo Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-53?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bo Wang updated YARN-53:


Attachment: YARN-53-v5.patch

Added a testcase for GetGroups

 Add protocol to YARN to support GetGroups
 -

 Key: YARN-53
 URL: https://issues.apache.org/jira/browse/YARN-53
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Alejandro Abdelnur
Assignee: Bo Wang
  Labels: patch
 Fix For: 2.0.0-alpha

 Attachments: MAPREDUCE-4268.patch, YARN-53.patch, YARN-53-v2.patch, 
 YARN-53-v3.patch, YARN-53-v4.patch, YARN-53-v5.patch


 This is a regression from Hadoop1, as hadoop mrgroups fails with:
 {code}
 Exception in thread "main" java.lang.UnsupportedOperationException
   at 
 org.apache.hadoop.mapred.tools.GetGroups.getProtocolAddress(GetGroups.java:50)
   at 
 org.apache.hadoop.tools.GetGroupsBase.getUgmProtocol(GetGroupsBase.java:98)
   at org.apache.hadoop.tools.GetGroupsBase.run(GetGroupsBase.java:71)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
   at org.apache.hadoop.mapred.tools.GetGroups.main(GetGroups.java:54)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (YARN-53) Add protocol to YARN to support GetGroups

2012-09-19 Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-53?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13459234#comment-13459234
 ] 

Hadoop QA commented on YARN-53:
---

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12545828/YARN-53-v5.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 2 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-YARN-Build/45//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/45//console

This message is automatically generated.

 Add protocol to YARN to support GetGroups
 -

 Key: YARN-53
 URL: https://issues.apache.org/jira/browse/YARN-53
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Alejandro Abdelnur
Assignee: Bo Wang
  Labels: patch
 Fix For: 2.0.0-alpha

 Attachments: MAPREDUCE-4268.patch, YARN-53.patch, YARN-53-v2.patch, 
 YARN-53-v3.patch, YARN-53-v4.patch, YARN-53-v5.patch


 This is a regression from Hadoop1, as hadoop mrgroups fails with:
 {code}
 Exception in thread "main" java.lang.UnsupportedOperationException
   at 
 org.apache.hadoop.mapred.tools.GetGroups.getProtocolAddress(GetGroups.java:50)
   at 
 org.apache.hadoop.tools.GetGroupsBase.getUgmProtocol(GetGroupsBase.java:98)
   at org.apache.hadoop.tools.GetGroupsBase.run(GetGroupsBase.java:71)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
   at org.apache.hadoop.mapred.tools.GetGroups.main(GetGroups.java:54)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (YARN-84) Use Builder to get RPC server in YARN

2012-09-19 Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-84?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved YARN-84.
-

   Resolution: Fixed
Fix Version/s: 3.0.0

 Use Builder to get RPC server in YARN
 -

 Key: YARN-84
 URL: https://issues.apache.org/jira/browse/YARN-84
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Brandon Li
Assignee: Brandon Li
Priority: Minor
 Fix For: 3.0.0

 Attachments: MAPREDUCE-4628.patch


 In HADOOP-8736, a Builder is introduced to replace all the getServer() 
 variants. This JIRA is the change in YARN.
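
 For reference, the Builder introduced by HADOOP-8736 is used roughly like 
 this (a sketch of the pattern, not the attached patch):
 {code}
 import java.io.IOException;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.ipc.RPC;
 import org.apache.hadoop.ipc.Server;

 public class BuildRpcServer {
   /** Replaces the old RPC.getServer(...) variants with a fluent builder. */
   static Server build(Configuration conf, Class<?> protocol, Object impl,
       String host, int port) throws IOException {
     return new RPC.Builder(conf)
         .setProtocol(protocol)
         .setInstance(impl)
         .setBindAddress(host)
         .setPort(port)
         .setNumHandlers(10)
         .build();
   }
 }
 {code}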

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira