[jira] [Commented] (MAPREDUCE-5713) JavaDoc Fixes

2014-01-08 Thread Chen He (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13865558#comment-13865558
 ] 

Chen He commented on MAPREDUCE-5713:


[~ajisakaa] Thank you for the fix. I think this patch is ready for checking in. 

 JavaDoc Fixes
 -

 Key: MAPREDUCE-5713
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5713
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: documentation
Affects Versions: 1.2.1, 2.2.0
Reporter: Ben Robie
Assignee: Chen He
Priority: Trivial
 Attachments: MAPREDUCE-5713.patch, hadoop-10210.patch

   Original Estimate: 0.5h
  Remaining Estimate: 0.5h

 https://hadoop.apache.org/docs/r1.2.1/api/org/apache/hadoop/mapred/InputFormat.html
 Instead of "record boundaries are to respected"
 Should be "record boundaries are to be respected"
 https://hadoop.apache.org/docs/r1.2.1/api/org/apache/hadoop/mapred/JobConf.html
 Instead of "some parameters interact subtly rest of the framework"
 Should be "some parameters interact subtly with the rest of the framework"



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Resolved] (MAPREDUCE-5618) Allow setting lineage information

2014-01-08 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla resolved MAPREDUCE-5618.
-

Resolution: Duplicate

This is being fixed as part of MAPREDUCE-5699.

 Allow setting lineage information
 -

 Key: MAPREDUCE-5618
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5618
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: mr-am
Affects Versions: 2.2.0
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla

 MR AM sets the applicationType to be MAPREDUCE. Downstream projects like 
 Pig, Hive, Oozie might want to set this to a different value for their 
 error-handling, query-tracking etc. Making this pluggable should help this 
 cause.
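 For background, the value in question is the application type carried on the YARN ApplicationSubmissionContext; below is a minimal, illustrative sketch of a client setting it (the "PIG" value and the class name are examples only, not part of this issue):
 {code}
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.util.Records;

public class ApplicationTypeExample {
  public static void main(String[] args) {
    // The MR client hard-codes this field to "MAPREDUCE"; downstream projects
    // such as Pig, Hive, or Oozie would like to plug in their own value.
    ApplicationSubmissionContext ctx =
        Records.newRecord(ApplicationSubmissionContext.class);
    ctx.setApplicationType("PIG");  // example value only
    System.out.println(ctx.getApplicationType());
  }
}
 {code}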



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Assigned] (MAPREDUCE-5022) Tasklogs disappear if JVM reuse is enabled

2014-01-08 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla reassigned MAPREDUCE-5022:
---

Assignee: (was: Karthik Kambatla)

 Tasklogs disappear if JVM reuse is enabled
 --

 Key: MAPREDUCE-5022
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5022
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: task
Affects Versions: 1.1.1
Reporter: Karthik Kambatla

 Can't see task logs when mapred.job.reuse.jvm.num.tasks is set to -1, but the 
 logs are visible when the same is set to 1.
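 For context, here is a minimal sketch (illustrative only) of enabling unlimited JVM reuse with the old mapred API, i.e. the configuration under which the task logs reportedly disappear; the class name is made up for the example:
 {code}
import org.apache.hadoop.mapred.JobConf;

public class JvmReuseExample {
  public static void main(String[] args) {
    JobConf conf = new JobConf();
    // Equivalent to mapred.job.reuse.jvm.num.tasks = -1: reuse the JVM for an
    // unlimited number of tasks. With the default of 1 the task logs are visible.
    conf.setNumTasksToExecutePerJvm(-1);
    System.out.println("JVM reuse: " + conf.getNumTasksToExecutePerJvm());
  }
}
 {code}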



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Assigned] (MAPREDUCE-5652) ShuffleHandler should handle NM restarts

2014-01-08 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla reassigned MAPREDUCE-5652:
---

Assignee: Karthik Kambatla

 ShuffleHandler should handle NM restarts
 

 Key: MAPREDUCE-5652
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5652
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla
  Labels: shuffle

 ShuffleHandler should work across NM restarts and not require re-running 
 map tasks. On NM restart, the map outputs are cleaned up, forcing re-execution 
 of map tasks; this should be avoided.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (MAPREDUCE-5714) TestMRAppComponentDependencies causes surefire to exit without saying proper goodbye

2014-01-08 Thread Jinghui Wang (JIRA)
Jinghui Wang created MAPREDUCE-5714:
---

 Summary: TestMRAppComponentDependencies causes surefire to exit 
without saying proper goodbye
 Key: MAPREDUCE-5714
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5714
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: test
Affects Versions: 2.2.0
 Environment: RHEL 6.4 
Reporter: Jinghui Wang
 Fix For: 2.2.0


When running test TestMRAppComponentDependencies, surefire exits with the 
following message: 

Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test 
(default-test) on project hadoop-mapreduce-client-app: ExecutionException; 
nested exception is java.util.concurrent.ExecutionException: 
java.lang.RuntimeException: The forked VM terminated without saying properly 
goodbye. VM crash or System.exit called ?

The following code is found in o.a.h.mapreduce.v2.app.MRAppMaster#shutDownJob, 
which the test case inherits. So, before the test testComponentStopOrder in 
TestMRAppComponentDependencies completes, shutDownJob finishes executing and 
exits the JVM, causing the error. Based on the comment, the System.exit(0) was 
a workaround needed before HADOOP-7140. Since the patch for HADOOP-7140 is 
committed in branch-2, are we safe to remove the System.exit call now?

// Bring the process down by force.
// Not needed after HADOOP-7140
LOG.info("Exiting MR AppMaster..GoodBye!");
sysexit();
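
For illustration only (a generic guard pattern, not the project's fix): a test JVM can be protected from a stray System.exit() by installing a SecurityManager that turns exit attempts into an exception, which is one way to keep a surefire-forked VM alive while the question above is settled. The class name is hypothetical.

{code}
import java.security.Permission;

// Generic sketch: convert System.exit() into an exception so the forked
// surefire JVM is not torn down in the middle of a test.
public final class NoExitGuard {
  public static void install() {
    System.setSecurityManager(new SecurityManager() {
      @Override
      public void checkExit(int status) {
        throw new SecurityException("System.exit(" + status + ") blocked during test");
      }
      @Override
      public void checkPermission(Permission perm) {
        // permit everything else
      }
    });
  }

  private NoExitGuard() { }
}
{code}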



 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Assigned] (MAPREDUCE-5677) Hadoop 2.2 historyviewer report NPE when read 0.23 fail job's history

2014-01-08 Thread Chen He (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen He reassigned MAPREDUCE-5677:
--

Assignee: Chen He

 Hadoop 2.2 historyviewer report NPE when read 0.23 fail job's history
 -

 Key: MAPREDUCE-5677
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5677
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobhistoryserver
Affects Versions: 2.2.0
Reporter: Chen He
Assignee: Chen He

 2013-12-10 12:49:39,394 WARN 
 org.apache.hadoop.yarn.webapp.GenericExceptionHandler: INTERNAL_SERVER_ERROR
 java.lang.NullPointerException
 at 
 org.apache.hadoop.mapreduce.jobhistory.EventReader.fromAvro(EventReader.java:174)
 at 
 org.apache.hadoop.mapreduce.jobhistory.TaskFailedEvent.setDatum(TaskFailedEvent.java:111)
 at 
 org.apache.hadoop.mapreduce.jobhistory.EventReader.getNextEvent(EventReader.java:156)
 at 
 org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.parse(JobHistoryParser.java:111)
 at 
 org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.parse(JobHistoryParser.java:153)
 at 
 org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.parse(JobHistoryParser.java:139)
 at 
 org.apache.hadoop.mapreduce.v2.hs.CompletedJob.loadFullHistoryData(CompletedJob.java:337)
 at 
 org.apache.hadoop.mapreduce.v2.hs.CompletedJob.init(CompletedJob.java:101)
 at 
 org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$HistoryFileInfo.loadJob(HistoryFileManager.java:410)
 at 
 org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.loadJob(CachedHistoryStorage.java:106)
 at 
 org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.getFullJob(CachedHistoryStorage.java:137)
 at 
 org.apache.hadoop.mapreduce.v2.hs.JobHistory.getJob(JobHistory.java:217)
 at 
 org.apache.hadoop.mapreduce.v2.app.webapp.AMWebServices.getJobFromJobIdString(AMWebServices.java:120)
 at 
 org.apache.hadoop.mapreduce.v2.hs.webapp.HsWebServices.getJob(HsWebServices.java:223)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
 at 
 com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
 at 
 com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
 at 
 com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
 at 
 com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
 at 
 com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
 at 
 com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
 at 
 com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
 at 
 com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
 at 
 com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
 at 
 com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
 at 
 com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
 at 
 com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
 at 
 com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
 at 
 com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:886)
 at 
 com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:834)
 at 
 com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:795)
 at 
 com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163)
 at 
 com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58)
 at 
 com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118)
 at 
 com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113)
 at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 at 

[jira] [Updated] (MAPREDUCE-5707) JobClient does not allow setting RPC timeout for communications with JT/RM

2014-01-08 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated MAPREDUCE-5707:
--

Summary: JobClient does not allow setting RPC timeout for communications 
with JT/RM  (was: JobClient does not allow to setting RPC timeout for 
communications with JT/RM)

 JobClient does not allow setting RPC timeout for communications with JT/RM
 --

 Key: MAPREDUCE-5707
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5707
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: client
Affects Versions: 1.2.1, 2.2.0
Reporter: Gilad Wolff

 The ApplicationClientProtocolPBClientImpl c'tor (and the JobClient 0.20.2 
 c'tor as well) creates an rpc proxy that eventually uses '0' as the 
 rpcTimeout:
 {code}
    public ApplicationClientProtocolPBClientImpl(long clientVersion,
        InetSocketAddress addr, Configuration conf) throws IOException {
      RPC.setProtocolEngine(conf, ApplicationClientProtocolPB.class,
          ProtobufRpcEngine.class);
      proxy = RPC.getProxy(ApplicationClientProtocolPB.class, clientVersion,
          addr, conf);
    }
 {code}
 which leads to this call in RPC:
 {code}
    public static <T> ProtocolProxy<T> getProtocolProxy(Class<T> protocol,
        long clientVersion,
        InetSocketAddress addr,
        UserGroupInformation ticket,
        Configuration conf,
        SocketFactory factory) throws IOException {
      return getProtocolProxy(
          protocol, clientVersion, addr, ticket, conf, factory, 0, null);
    }
 {code}
 (the '0' above is the rpc timeout).
 Clients should be able to specify the rpc timeout.
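 Purely as an illustration (not a committed change), here is a sketch of how a caller could plumb a non-zero timeout into the same getProtocolProxy call shown above; the config key yarn.client.rpc.timeout-ms and the helper class are hypothetical:
 {code}
import java.io.IOException;
import java.net.InetSocketAddress;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.RPC;
import org.apache.hadoop.net.NetUtils;
import org.apache.hadoop.security.UserGroupInformation;

public final class TimeoutRpcProxies {
  // Hypothetical config key, for illustration only.
  public static final String RPC_TIMEOUT_MS = "yarn.client.rpc.timeout-ms";

  // Build a proxy for the given protocol, forwarding a configurable rpcTimeout
  // instead of the hard-coded 0 used today.
  public static <T> T createProxy(Class<T> protocol, long clientVersion,
      InetSocketAddress addr, Configuration conf) throws IOException {
    int rpcTimeout = conf.getInt(RPC_TIMEOUT_MS, 0);
    return RPC.getProtocolProxy(protocol, clientVersion, addr,
        UserGroupInformation.getCurrentUser(), conf,
        NetUtils.getDefaultSocketFactory(conf), rpcTimeout, null).getProxy();
  }

  private TimeoutRpcProxies() { }
}
 {code}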



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (MAPREDUCE-5707) JobClient does not allow to setting RPC timeout for communications with JT/RM

2014-01-08 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated MAPREDUCE-5707:
--

Summary: JobClient does not allow to setting RPC timeout for communications 
with JT/RM  (was: ApplicationClientProtocolPBClientImpl (and JobClient) does 
not allow to set rpcTimeout)

 JobClient does not allow to setting RPC timeout for communications with JT/RM
 -

 Key: MAPREDUCE-5707
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5707
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: client
Affects Versions: 1.2.1, 2.2.0
Reporter: Gilad Wolff

 The ApplicationClientProtocolPBClientImpl c'tor (and the JobClient 0.20.2 
 c'tor as well) creates an rpc proxy that eventually uses '0' as the 
 rpcTimeout:
 {code}
    public ApplicationClientProtocolPBClientImpl(long clientVersion,
        InetSocketAddress addr, Configuration conf) throws IOException {
      RPC.setProtocolEngine(conf, ApplicationClientProtocolPB.class,
          ProtobufRpcEngine.class);
      proxy = RPC.getProxy(ApplicationClientProtocolPB.class, clientVersion,
          addr, conf);
    }
 {code}
 which leads to this call in RPC:
 {code}
    public static <T> ProtocolProxy<T> getProtocolProxy(Class<T> protocol,
        long clientVersion,
        InetSocketAddress addr,
        UserGroupInformation ticket,
        Configuration conf,
        SocketFactory factory) throws IOException {
      return getProtocolProxy(
          protocol, clientVersion, addr, ticket, conf, factory, 0, null);
    }
 {code}
 (the '0' above is the rpc timeout).
 Clients should be able to specify the rpc timeout.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (MAPREDUCE-5707) JobClient does not allow to setting RPC timeout for communications with JT/RM

2014-01-08 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza updated MAPREDUCE-5707:
--

Affects Version/s: (was: 0.20.2)
   1.2.1

 JobClient does not allow to setting RPC timeout for communications with JT/RM
 -

 Key: MAPREDUCE-5707
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5707
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: client
Affects Versions: 1.2.1, 2.2.0
Reporter: Gilad Wolff

 The ApplicationClientProtocolPBClientImpl c'tor (and the JobClient 0.20.2 
 c'tor as well) creates an rpc proxy that eventually uses '0' as the 
 rpcTimeout:
 {code}
    public ApplicationClientProtocolPBClientImpl(long clientVersion,
        InetSocketAddress addr, Configuration conf) throws IOException {
      RPC.setProtocolEngine(conf, ApplicationClientProtocolPB.class,
          ProtobufRpcEngine.class);
      proxy = RPC.getProxy(ApplicationClientProtocolPB.class, clientVersion,
          addr, conf);
    }
 {code}
 which leads to this call in RPC:
 {code}
    public static <T> ProtocolProxy<T> getProtocolProxy(Class<T> protocol,
        long clientVersion,
        InetSocketAddress addr,
        UserGroupInformation ticket,
        Configuration conf,
        SocketFactory factory) throws IOException {
      return getProtocolProxy(
          protocol, clientVersion, addr, ticket, conf, factory, 0, null);
    }
 {code}
 (the '0' above is the rpc timeout).
 Clients should be able to specify the rpc timeout.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (MAPREDUCE-5626) TaskLogServlet could not get syslog

2014-01-08 Thread yangjun (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yangjun updated MAPREDUCE-5626:
---

Assignee: (was: yangjun)

 TaskLogServlet could not get syslog
 ---

 Key: MAPREDUCE-5626
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5626
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 1.2.1
 Environment: Linux version 2.6.18-238.9.1.el5
 Java(TM) SE Runtime Environment (build 1.6.0_43-b01)
 hadoop-1.2.1
Reporter: yangjun
Priority: Minor
  Labels: patch
 Fix For: 1.2.1

   Original Estimate: 2h
  Remaining Estimate: 2h

 When multiple tasks share one JVM and generate logs,
 e.g.
 ./attempt_201211220735_0001_m_00_0:
 log.index
 ./attempt_201211220735_0001_m_01_0:
 log.index
 ./attempt_201211220735_0001_m_02_0:
 log.index  stderr  stdout  syslog
 Fetching http://:50060/tasklog?attemptid=attempt_201211220735_0001_m_00_0 
 returns stderr and stdout, but not the others, including syslog.
 See the TaskLogServlet.haveTaskLog() method: it does not check the local 
 log.index, but checks the original path instead.
 Proposed fix: modify the TaskLogServlet.haveTaskLog() method:
 private boolean haveTaskLog(TaskAttemptID taskId, boolean isCleanup,
     TaskLog.LogName type) throws IOException {
   File f = TaskLog.getTaskLogFile(taskId, isCleanup, type);
   if (f.exists() && f.canRead()) {
     return true;
   } else {
     File indexFile = TaskLog.getIndexFile(taskId, isCleanup);
     if (!indexFile.exists()) {
       return false;
     }

     BufferedReader fis;
     try {
       fis = new BufferedReader(new InputStreamReader(
           SecureIOUtils.openForRead(indexFile,
               TaskLog.obtainLogDirOwner(taskId))));
     } catch (FileNotFoundException ex) {
       LOG.warn("Index file for the log of " + taskId
           + " does not exist.");

       // Assume no task reuse is used and files exist on attemptdir
       StringBuffer input = new StringBuffer();
       input.append(LogFileDetail.LOCATION
           + TaskLog.getAttemptDir(taskId, isCleanup) + "\n");
       for (LogName logName : TaskLog.LOGS_TRACKED_BY_INDEX_FILES) {
         input.append(logName + ":0 -1\n");
       }
       fis = new BufferedReader(new StringReader(input.toString()));
     }

     try {
       String str = fis.readLine();
       if (str == null) { // the file doesn't have anything
         throw new IOException("Index file for the log of " + taskId
             + " is empty.");
       }
       String loc = str.substring(str.indexOf(LogFileDetail.LOCATION)
           + LogFileDetail.LOCATION.length());
       File tf = new File(loc, type.toString());
       return tf.exists() && tf.canRead();
     } finally {
       if (fis != null)
         fis.close();
     }
   }
 }
 Workaround:
 adding filter=SYSLOG to the URL also returns syslog.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Resolved] (MAPREDUCE-5397) AM crashes because Webapp failed to start on multi node cluster

2014-01-08 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He resolved MAPREDUCE-5397.


Resolution: Cannot Reproduce

 AM crashes because Webapp failed to start on multi node cluster
 ---

 Key: MAPREDUCE-5397
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5397
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Jian He
Assignee: Jian He
 Attachments: log.txt


 I set up a 12-node cluster and tried submitting jobs, but got this exception.
 The job is able to succeed after the AM crashes and retries a few times (2 or 3).
 {code}
 2013-07-12 18:56:28,438 INFO [main] org.mortbay.log: Extract 
 jar:file:/grid/0/dev/jhe/hadoop-2.1.0-beta/share/hadoop/yarn/hadoop-yarn-common-2.1.0-beta.jar!/webapps/mapreduce
  to /tmp/Jetty_0_0_0_0_43554_mapreduceljbmlg/webapp
 2013-07-12 18:56:28,528 WARN [main] org.mortbay.log: Failed startup of 
 context 
 org.mortbay.jetty.webapp.WebAppContext@2726b2{/,jar:file:/grid/0/dev/jhe/hadoop-2.1.0-beta/share/hadoop/yarn/hadoop-yarn-common-2.1.0-beta.jar!/webapps/mapreduce}
 java.io.FileNotFoundException: 
 /tmp/Jetty_0_0_0_0_43554_mapreduceljbmlg/webapp/webapps/mapreduce/.keep 
 (No such file or directory)
   at java.io.FileOutputStream.open(Native Method)
   at java.io.FileOutputStream.<init>(FileOutputStream.java:194)
   at java.io.FileOutputStream.<init>(FileOutputStream.java:145)
   at org.mortbay.resource.JarResource.extract(JarResource.java:215)
   at 
 org.mortbay.jetty.webapp.WebAppContext.resolveWebApp(WebAppContext.java:974)
   at 
 org.mortbay.jetty.webapp.WebAppContext.getWebInf(WebAppContext.java:832)
   at 
 org.mortbay.jetty.webapp.WebInfConfiguration.configureClassLoader(WebInfConfiguration.java:62)
   at 
 org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:489)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
   at 
 org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
   at org.mortbay.jetty.Server.doStart(Server.java:224)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at org.apache.hadoop.http.HttpServer.start(HttpServer.java:684)
   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:211)
   at 
 org.apache.hadoop.mapreduce.v2.app.client.MRClientService.serviceStart(MRClientService.java:134)
   at 
 org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
   at 
 org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:101)
   at 
 org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1019)
   at 
 org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
   at 
 org.apache.hadoop.mapreduce.v2.app.MRAppMaster$1.run(MRAppMaster.java:1394)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477)
   at 
 org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1390)
 {code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (MAPREDUCE-5638) Port Hadoop Archives document to trunk

2014-01-08 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated MAPREDUCE-5638:
-

Summary: Port Hadoop Archives document to trunk  (was: Convert Hadoop 
Archives document to APT)

 Port Hadoop Archives document to trunk
 --

 Key: MAPREDUCE-5638
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5638
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
  Components: documentation
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Attachments: MAPREDUCE-5638-md.patch, MAPREDUCE-5638.patch


 Convert Hadoop Archives document from forrest to APT.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (MAPREDUCE-5638) Convert Hadoop Archives document to APT

2014-01-08 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated MAPREDUCE-5638:
-

Attachment: MAPREDUCE-5638-md.patch

Attaching a patch to apply markdown style. Thanks [~ste...@apache.org]!

 Convert Hadoop Archives document to APT
 ---

 Key: MAPREDUCE-5638
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5638
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
  Components: documentation
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Attachments: MAPREDUCE-5638-md.patch, MAPREDUCE-5638.patch


 Convert Hadoop Archives document from forrest to APT.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (MAPREDUCE-5638) Port Hadoop Archives document to trunk

2014-01-08 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated MAPREDUCE-5638:
-

Description: Now Hadoop Archive document exists only in branch-1. Let's 
port Hadoop Archives document to trunk.  (was: Convert Hadoop Archives document 
from forrest to APT.)

 Port Hadoop Archives document to trunk
 --

 Key: MAPREDUCE-5638
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5638
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
  Components: documentation
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Attachments: MAPREDUCE-5638-md.patch, MAPREDUCE-5638.patch


 Now Hadoop Archive document exists only in branch-1. Let's port Hadoop 
 Archives document to trunk.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (MAPREDUCE-5638) Port Hadoop Archives document to trunk

2014-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13866424#comment-13866424
 ] 

Hadoop QA commented on MAPREDUCE-5638:
--

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12622128/MAPREDUCE-5638-md.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4308//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4308//console

This message is automatically generated.

 Port Hadoop Archives document to trunk
 --

 Key: MAPREDUCE-5638
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5638
 Project: Hadoop Map/Reduce
  Issue Type: Sub-task
  Components: documentation
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Attachments: MAPREDUCE-5638-md.patch, MAPREDUCE-5638.patch


 Now Hadoop Archive document exists only in branch-1. Let's port Hadoop 
 Archives document to trunk.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)