[jira] [Assigned] (MAPREDUCE-7178) NPE while YarnChild shutdown

2019-01-25 Thread Tsuyoshi Ozawa (JIRA)


 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-7178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa reassigned MAPREDUCE-7178:
-

Assignee: lujie

> NPE while YarnChild shutdown
> ---
>
> Key: MAPREDUCE-7178
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-7178
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: lujie
>Assignee: lujie
>Priority: Major
> Attachments: MR-7178_1.patch, yarnchild.log
>
>
> In YarnChild.main
> {code:java}
> try {
>   logSyncer = TaskLog.createLogSyncer();   // line 168
>   ...
>   taskFinal.run(job, umbilical);           // line 178
> } catch (Exception exception) {            // line 187
>   LOG.warn("Exception running child : "
>       + StringUtils.stringifyException(exception));
>   ...
>   task.taskCleanup(umbilical);             // line 200
> }{code}
> Line 178 initializes task.committer, but line 168 may throw an exception 
> first; in that case the initialization is skipped and task.committer stays 
> null. The catch block at line 187 handles the exception and performs cleanup 
> (line 200), which uses task.committer without a null check, so an NPE occurs:
> {code:java}
> 2019-01-23 16:59:42,864 INFO [main] org.apache.hadoop.mapred.YarnChild: 
> Exception cleaning up: java.lang.NullPointerException
> at org.apache.hadoop.mapred.Task.taskCleanup(Task.java:1458)
> at org.apache.hadoop.mapred.YarnChild$3.run(YarnChild.java:200)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:197)
> {code}
> Why might line 168 throw an exception? The log below gives an example:
> {code:java}
> 2019-01-23 16:59:42,857 WARN [main] org.apache.hadoop.mapred.YarnChild: 
> Exception running child : java.lang.IllegalStateException: Shutdown in 
> progress, cannot add a shutdownHook
> at 
> org.apache.hadoop.util.ShutdownHookManager.addShutdownHook(ShutdownHookManager.java:299)
> at org.apache.hadoop.mapred.TaskLog.createLogSyncer(TaskLog.java:340)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168){code}
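The quoted description suggests an obvious guard on the cleanup path. The sketch below is an illustration only, not the attached MR-7178_1.patch: the field and method names imitate Task/YarnChild but are stand-ins.

```java
// Illustrative sketch only: a null-safe cleanup mirroring the bug report.
// The field and method names imitate Task/YarnChild but are stand-ins.
public class CleanupSketch {
    Object committer; // stays null if setup (e.g. createLogSyncer) threw early

    // Returns true if cleanup ran, false if it was safely skipped.
    boolean taskCleanup() {
        if (committer == null) {
            // Setup failed before run() could initialize the committer;
            // without this guard, using the committer here would NPE.
            return false;
        }
        // ... the real code would call into the committer here ...
        return true;
    }

    public static void main(String[] args) {
        CleanupSketch t = new CleanupSketch();        // committer never set
        System.out.println("cleanup ran: " + t.taskCleanup()); // prints false
    }
}
```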



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: mapreduce-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (MAPREDUCE-7148) Fast fail jobs when exceeds dfs quota limitation

2018-10-05 Thread Tsuyoshi Ozawa (JIRA)


[ 
https://issues.apache.org/jira/browse/MAPREDUCE-7148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16639328#comment-16639328
 ] 

Tsuyoshi Ozawa edited comment on MAPREDUCE-7148 at 10/5/18 6:25 AM:


[~tiana528] It sounds good to me to specify FQCNs via configuration if it 
works. I have made you the assignee of this task. When you are updating your 
patch, please add test code to verify that your modification works correctly. 


was (Author: ozawa):
[~tiana528] It sounds good to me specify FQCNs via configurations if it works. 
I made you assignee of this task. When you'are updating your patch, please add 
test code to verify your modification work correctly. 

> Fast fail jobs when exceeds dfs quota limitation
> 
>
> Key: MAPREDUCE-7148
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-7148
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: task
>Affects Versions: 2.7.0, 2.8.0, 2.9.0
> Environment: hadoop 2.7.3
>Reporter: Wang Yan
>Assignee: Wang Yan
>Priority: Major
> Attachments: MAPREDUCE-7148.001.patch
>
>
> We are running Hive jobs with a DFS quota limitation per job (3 TB). If a job 
> hits the DFS quota limitation, the task that hit it will fail, and there will 
> be a few task retries before the job actually fails. The retries are not very 
> helpful because the job will always fail anyway. In one of the worse cases, 
> we had a job with a single reduce task writing more than 3 TB to HDFS over 
> 20 hours; the reduce task exceeded the quota limitation and retried 4 times 
> until the job finally failed, consuming a lot of unnecessary resources. This 
> ticket aims at providing a feature to let a job fail fast when it writes too 
> much data to the DFS and exceeds the DFS quota limitation. The fast fail 
> mechanism was introduced in MAPREDUCE-7022 and MAPREDUCE-6489.






[jira] [Comment Edited] (MAPREDUCE-7148) Fast fail jobs when exceeds dfs quota limitation

2018-10-05 Thread Tsuyoshi Ozawa (JIRA)


[ 
https://issues.apache.org/jira/browse/MAPREDUCE-7148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16639328#comment-16639328
 ] 

Tsuyoshi Ozawa edited comment on MAPREDUCE-7148 at 10/5/18 6:26 AM:


[~tiana528] It sounds good to me to specify FQCNs via configuration if it 
works. I have made you the assignee of this task. When you are updating your 
patch, please add test code to verify that your modification works correctly. 


was (Author: ozawa):
[~tiana528] It sounds good to me specify FQCNs via configurations if it works. 
I made you assignee of this task. When you'are updating your patch, please add 
test code to verify your modification works correctly. 







[jira] [Assigned] (MAPREDUCE-7148) Fast fail jobs when exceeds dfs quota limitation

2018-10-05 Thread Tsuyoshi Ozawa (JIRA)


 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-7148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa reassigned MAPREDUCE-7148:
-

Assignee: Wang Yan







[jira] [Commented] (MAPREDUCE-7148) Fast fail jobs when exceeds dfs quota limitation

2018-10-05 Thread Tsuyoshi Ozawa (JIRA)


[ 
https://issues.apache.org/jira/browse/MAPREDUCE-7148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16639328#comment-16639328
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-7148:
---

[~tiana528] It sounds good to me to specify FQCNs via configuration if it 
works. I have made you the assignee of this task. When you are updating your 
patch, please add test code to verify that your modification works correctly. 







[jira] [Commented] (MAPREDUCE-7148) Fast fail jobs when exceeds dfs quota limitation

2018-10-04 Thread Tsuyoshi Ozawa (JIRA)


[ 
https://issues.apache.org/jira/browse/MAPREDUCE-7148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16639232#comment-16639232
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-7148:
---

[~tiana528], thank you for suggesting this awesome feature. If I understand 
correctly, you suggest that deterministic failures, in this case hitting the 
DFS quota limitation, should not cause the task to be retried again and again. 
I think the feature itself is useful. 

At the patch design level, hadoop-mapreduce-client-app should not depend on 
hadoop-hdfs directly. Is it possible to rewrite it using the exception exposed 
at the FileSystem-level API?

cc: [~ste...@apache.org] do you have any ideas on how to handle this kind of 
quota error without breaking the FileSystem abstraction? One possible solution 
is to handle and parse the error message of the FileSystem API's IOException 
raised from HDFS's QuotaExceeded exception, but that would be dirty.
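A hedged sketch of the message-parsing approach mentioned in the last paragraph. The predicate and the sample message below are assumptions for illustration, not a committed Hadoop API:

```java
import java.io.IOException;

// Illustrative only: classify an IOException surfaced through the
// FileSystem-level API as a quota error by inspecting its message,
// without depending on hadoop-hdfs exception types.
public class QuotaErrorSketch {
    static boolean looksLikeQuotaExceeded(IOException e) {
        String msg = e.getMessage();
        if (msg == null) {
            return false;
        }
        String lower = msg.toLowerCase();
        // Assumption: HDFS quota failures mention "quota" and "exceeded".
        return lower.contains("quota") && lower.contains("exceeded");
    }

    public static void main(String[] args) {
        IOException quota = new IOException(
                "The DiskSpace quota of /user/hive is exceeded");
        IOException other = new IOException("Connection reset by peer");
        System.out.println(looksLikeQuotaExceeded(quota)); // true
        System.out.println(looksLikeQuotaExceeded(other)); // false
    }
}
```

String matching like this is exactly the "dirty" part: it couples the caller to message wording rather than to an exception type.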







[jira] [Moved] (MAPREDUCE-7107) NPE on MapReduce AM leaves the job in an inconsistent state

2018-06-07 Thread Tsuyoshi Ozawa (JIRA)


 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-7107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa moved HADOOP-15517 to MAPREDUCE-7107:


Affects Version/s: (was: 2.7.3)
   2.7.3
  Key: MAPREDUCE-7107  (was: HADOOP-15517)
  Project: Hadoop Map/Reduce  (was: Hadoop Common)

> NPE on MapReduce AM leaves the job in an inconsistent state
> ---
>
> Key: MAPREDUCE-7107
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-7107
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 2.7.3
>Reporter: Gonzalo Herreros
>Priority: Major
>
> On AWS, while running a MapReduce job, one of the nodes died and was 
> decommissioned.
> However, the AM doesn't seem to handle that well, and the job didn't complete 
> its mappers correctly from that point on.
> {code:java}
> 2018-06-07 14:29:08,686 ERROR [RMCommunicator Allocator] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: ERROR IN CONTACTING RM. 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.handleUpdatedNodes(RMContainerAllocator.java:879)
>   at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.getResources(RMContainerAllocator.java:779)
>   at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.heartbeat(RMContainerAllocator.java:259)
>   at 
> org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator$AllocatorRunnable.run(RMCommunicator.java:281)
>   at java.lang.Thread.run(Thread.java:748)
> 2018-06-07 14:29:08,686 INFO [IPC Server handler 4 on 46577] 
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: MapCompletionEvents request 
> from attempt_1528378746527_0011_r_000553_0. startIndex 13112 maxEvents 1453
> 2018-06-07 14:29:08,686 ERROR [AsyncDispatcher event handler] 
> org.apache.hadoop.yarn.event.AsyncDispatcher: Error in dispatcher thread
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$UpdatedNodesTransition.transition(JobImpl.java:2162)
>   at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl$UpdatedNodesTransition.transition(JobImpl.java:2155)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:362)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
>   at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
>   at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:997)
>   at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:139)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:1346)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:1342)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:184)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:110)
>   at java.lang.Thread.run(Thread.java:748)
> 2018-06-07 14:29:08,688 INFO [AsyncDispatcher ShutDown handler] 
> org.apache.hadoop.yarn.event.AsyncDispatcher: Exiting, bbye..
> {code}






[jira] [Commented] (MAPREDUCE-4522) DBOutputFormat Times out on large batch inserts

2016-07-24 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15391345#comment-15391345
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-4522:
---

[~shyam_gav] Thanks for updating, and sorry for my delay. The following lines 
exceed the 80-character limit, so could you fix them?

{quote}
+  List subLists = Lists.partition(records, 
context.getConfiguration().getInt(MR_DB_OUTPUTFORMAT_BATCH_SIZE,1000));
{quote}

{quote}
+  public static final String 
MR_DB_OUTPUTFORMAT_BATCH_SIZE="mapreduce.output.dboutputformat.batch-size";
{quote}

{quote}
+  The batch size of SQL statements that will be executed before 
reporting progress. Default is 1000

{quote}

> DBOutputFormat Times out on large batch inserts
> ---
>
> Key: MAPREDUCE-4522
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4522
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: task-controller
>Affects Versions: 0.20.205.0
>Reporter: Nathan Jarus
>Assignee: Shyam Gavulla
>  Labels: newbie
> Attachments: MAPREDUCE-4522.001.patch
>
>
> In DBRecordWriter#close(), progress is never updated. In large batch inserts, 
> this can cause the reduce task to time out due to the amount of time it takes 
> the SQL engine to process that insert. 
> Potential solutions I can see:
> Don't batch inserts; do the insert when DBRecordWriter#write() is called 
> (awful)
> Spin up a thread in DBRecordWriter#close() and update progress in that. 
> (gross)
> I can provide code for either if you're interested. 






[jira] [Commented] (MAPREDUCE-6729) Accurately compute the test execute time in DFSIO

2016-07-08 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367288#comment-15367288
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6729:
---

Ah, I see why the bot fails to fetch your change: it tries to fetch from your 
PR instead of from the patch file itself.

I found that you need to rebase your change on trunk. Please push it to your 
GitHub branch:
https://github.com/apache/hadoop/pull/111/commits



> Accurately compute the test execute time in DFSIO
> -
>
> Key: MAPREDUCE-6729
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6729
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: benchmarks, performance, test
>Affects Versions: 2.9.0
>Reporter: mingleizhang
>Assignee: mingleizhang
>Priority: Minor
>  Labels: performance, test
> Attachments: MAPREDUCE-6729.001.patch
>
>
> When running DFSIO as a distributed I/O benchmark, writing plenty of files to 
> disk (or reading them back) can cause performance problems and imprecise 
> measurements. The issue is that the existing code deletes files before 
> running a job, which adds extra time and therefore skews the measured 
> execution time and throughput when there are many files. We should replace 
> or improve this hack to prevent this from happening in the future.
> {code}
> public static void testWrite() throws Exception {
>   FileSystem fs = cluster.getFileSystem();
>   long tStart = System.currentTimeMillis();
>   // writeTest() also calls fs.delete(...), so that extra time
>   // is included in execTime
>   bench.writeTest(fs);
>   long execTime = System.currentTimeMillis() - tStart;
>   bench.analyzeResult(fs, TestType.TEST_TYPE_WRITE, execTime);
> }
>
> private void writeTest(FileSystem fs) throws IOException {
>   Path writeDir = getWriteDir(config);
>   fs.delete(getDataDir(config), true);
>   fs.delete(writeDir, true);
>   runIOTest(WriteMapper.class, writeDir);
> }
> {code} 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java]
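The fix direction implied by the description is to keep the preparatory deletes outside the timed region. A minimal hedged sketch, where the helper and the sleep-based steps are illustrative stand-ins rather than the actual TestDFSIO code:

```java
// Illustrative only: time just the measured work, with untimed preparation.
public class DfsioTimingSketch {
    interface Step { void run() throws Exception; }

    // Run 'prepare' (e.g. deleting old output) untimed, then time 'measured'.
    static long timeOnly(Step prepare, Step measured) throws Exception {
        prepare.run();                       // not counted in execTime
        long tStart = System.currentTimeMillis();
        measured.run();                      // the benchmarked I/O
        return System.currentTimeMillis() - tStart;
    }

    public static void main(String[] args) throws Exception {
        long execTime = timeOnly(
                () -> Thread.sleep(100),  // slow "delete" that must not count
                () -> Thread.sleep(10));  // the write being benchmarked
        System.out.println("execTime ms (delete excluded): " + execTime);
    }
}
```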






[jira] [Updated] (MAPREDUCE-6729) Accurately compute the test execute time in DFSIO

2016-07-07 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-6729:
--
Status: Patch Available  (was: Open)







[jira] [Commented] (MAPREDUCE-6729) Accurately compute the test execute time in DFSIO

2016-07-07 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367188#comment-15367188
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6729:
---

FYI, the How To Contribute page on the Hadoop wiki is also useful:
http://wiki.apache.org/hadoop/HowToContribute









[jira] [Commented] (MAPREDUCE-6729) Accurately compute the test execute time in DFSIO

2016-07-07 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15367183#comment-15367183
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6729:
---

[~drankye] Sure, I will take a look.

[~minglei] I think the patch cannot be applied because it was generated from a 
directory other than the root of the Hadoop source tree. 

{quote}
 .../test/java/org/apache/hadoop/fs/TestDFSIO.java  | 54 +++---
{quote}

Please generate the patch from the root directory of the Hadoop source tree 
using the git diff command:

{code}
~workplace/hadoop$ git diff --no-prefix (latest commit) > 
MAPREDUCE-6729.001.patch
{code}








[jira] [Work started] (MAPREDUCE-5221) Reduce side Combiner is not used when using the new API

2016-06-28 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on MAPREDUCE-5221 started by Tsuyoshi Ozawa.
-
> Reduce side Combiner is not used when using the new API
> ---
>
> Key: MAPREDUCE-5221
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5221
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 2.0.4-alpha
>Reporter: Siddharth Seth
>Assignee: Tsuyoshi Ozawa
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-5221.1.patch, MAPREDUCE-5221.10.patch, 
> MAPREDUCE-5221.11.patch, MAPREDUCE-5221.2.patch, MAPREDUCE-5221.3.patch, 
> MAPREDUCE-5221.4.patch, MAPREDUCE-5221.5.patch, MAPREDUCE-5221.6.patch, 
> MAPREDUCE-5221.7-2.patch, MAPREDUCE-5221.7.patch, MAPREDUCE-5221.8.patch, 
> MAPREDUCE-5221.9.patch
>
>
> If a combiner is specified using o.a.h.mapreduce.Job.setCombinerClass, it 
> will be silently ignored on the reduce side, since the reduce-side code is 
> only aware of the old-API combiner.
> This doesn't fail the job, since the new combiner key does not deprecate the 
> old key.






[jira] [Commented] (MAPREDUCE-6704) Container fail to launch for mapred application

2016-06-27 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351973#comment-15351973
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6704:
---

[~ajisakaa] 

{quote}
Agreed. If we are going to do this, we need to revert the documentation of 
MAPREDUCE-6702 and update the release note. Would you revert the documentation 
change in this issue?
{quote}

My previous comment means that I don't agree with the point. My view is that 
it is not straightforward, and is confusing. Hence, I think it should be 
{{"HADOOP_MAPRED_HOME=" + Apps.crossPlatformify("HADOOP_MAPRED_HOME");}}. 
Could you tell me your thoughts?

> Container fail to launch for mapred application
> ---
>
> Key: MAPREDUCE-6704
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6704
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Blocker
> Attachments: 0001-MAPREDUCE-6704.patch, 0001-YARN-5026.patch
>
>
> Containers fail to launch for MapReduce applications.
> As part of the launch script, the default value of {{HADOOP_MAPRED_HOME}} is 
> not set. After 
> https://github.com/apache/hadoop/commit/9d4d30243b0fc9630da51a2c17b543ef671d035c
> {{HADOOP_MAPRED_HOME}} cannot be obtained from {{builder.environment()}}, 
> since {{DefaultContainerExecutor#buildCommandExecutor}} sets inherit to false.
> {noformat}
> 16/05/02 09:16:05 INFO mapreduce.Job: Job job_1462155939310_0004 failed with 
> state FAILED due to: Application application_1462155939310_0004 failed 2 
> times due to AM Container for appattempt_1462155939310_0004_02 exited 
> with  exitCode: 1
> Failing this attempt.Diagnostics: Exception from container-launch.
> Container id: container_1462155939310_0004_02_01
> Exit code: 1
> Stack trace: ExitCodeException exitCode=1:
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:946)
> at org.apache.hadoop.util.Shell.run(Shell.java:850)
> at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1144)
> at 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:227)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.launchContainer(ContainerLaunch.java:385)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:281)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:89)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Container exited with a non-zero exit code 1. Last 4096 bytes of stderr :
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option UseSplitVerifier; 
> support was removed in 8.0
> Error: Could not find or load main class 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster
> Container exited with a non-zero exit code 1. Last 4096 bytes of stderr :
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option UseSplitVerifier; 
> support was removed in 8.0
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: mapreduce-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-issues-h...@hadoop.apache.org



[jira] [Commented] (MAPREDUCE-5221) Reduce side Combiner is not used when using the new API

2016-06-27 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351916#comment-15351916
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-5221:
---

The patches up to v11 need to be fixed to work with both the old and new APIs. I 
will fix this soon.

> Reduce side Combiner is not used when using the new API
> ---
>
> Key: MAPREDUCE-5221
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5221
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 2.0.4-alpha
>Reporter: Siddharth Seth
>Assignee: Tsuyoshi Ozawa
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-5221.1.patch, MAPREDUCE-5221.10.patch, 
> MAPREDUCE-5221.11.patch, MAPREDUCE-5221.2.patch, MAPREDUCE-5221.3.patch, 
> MAPREDUCE-5221.4.patch, MAPREDUCE-5221.5.patch, MAPREDUCE-5221.6.patch, 
> MAPREDUCE-5221.7-2.patch, MAPREDUCE-5221.7.patch, MAPREDUCE-5221.8.patch, 
> MAPREDUCE-5221.9.patch
>
>
> If a combiner is specified using o.a.h.mapreduce.Job.setCombinerClass, it 
> will be silently ignored on the reduce side, since the reduce-side code is only 
> aware of the old-API combiner.
> This doesn't fail the job, since the new combiner key does not deprecate the 
> old key.
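The silent-ignore behaviour described above can be sketched as a configuration lookup that only consults the old-API key. The key names below ("mapred.combiner.class" for the old API, "mapreduce.job.combine.class" for the new API) follow Hadoop's naming conventions but are assumptions here, not a copy of the real reduce-side code:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of why a new-API combiner never runs on the reduce side: the
// merge path only reads the old-API key, and because the new key does not
// deprecate the old one, no warning is ever raised.
public class CombinerLookup {
    static final String OLD_KEY = "mapred.combiner.class";
    static final String NEW_KEY = "mapreduce.job.combine.class";

    // Reduce-side resolution: consults only the old-API key.
    static String reduceSideCombiner(Map<String, String> conf) {
        return conf.get(OLD_KEY); // new-API key is silently ignored
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put(NEW_KEY, "WordCountCombiner"); // set via Job.setCombinerClass
        System.out.println(reduceSideCombiner(conf)); // null: combiner skipped
    }
}
```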






[jira] [Updated] (MAPREDUCE-5221) Reduce side Combiner is not used when using the new API

2016-06-27 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-5221:
--
Status: Open  (was: Patch Available)

> Reduce side Combiner is not used when using the new API
> ---
>
> Key: MAPREDUCE-5221
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5221
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 2.0.4-alpha
>Reporter: Siddharth Seth
>Assignee: Tsuyoshi Ozawa
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-5221.1.patch, MAPREDUCE-5221.10.patch, 
> MAPREDUCE-5221.11.patch, MAPREDUCE-5221.2.patch, MAPREDUCE-5221.3.patch, 
> MAPREDUCE-5221.4.patch, MAPREDUCE-5221.5.patch, MAPREDUCE-5221.6.patch, 
> MAPREDUCE-5221.7-2.patch, MAPREDUCE-5221.7.patch, MAPREDUCE-5221.8.patch, 
> MAPREDUCE-5221.9.patch
>
>
> If a combiner is specified using o.a.h.mapreduce.Job.setCombinerClass, it 
> will be silently ignored on the reduce side, since the reduce-side code is only 
> aware of the old-API combiner.
> This doesn't fail the job, since the new combiner key does not deprecate the 
> old key.






[jira] [Commented] (MAPREDUCE-6441) LocalDistributedCacheManager for concurrent sqoop processes fails to create unique directories

2016-06-27 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351807#comment-15351807
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6441:
---

[~rchiang] thanks for rebasing. The fix itself looks good to me. Could you 
add a test case that reproduces this issue with access from multiple threads?

> LocalDistributedCacheManager for concurrent sqoop processes fails to create 
> unique directories
> --
>
> Key: MAPREDUCE-6441
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6441
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: William Watson
>Assignee: Ray Chiang
> Attachments: HADOOP-10924.02.patch, 
> HADOOP-10924.03.jobid-plus-uuid.patch, MAPREDUCE-6441.004.patch
>
>
> Kicking off many sqoop processes in different threads results in:
> {code}
> 2014-08-01 13:47:24 -0400:  INFO - 14/08/01 13:47:22 ERROR tool.ImportTool: 
> Encountered IOException running import job: java.io.IOException: 
> java.util.concurrent.ExecutionException: java.io.IOException: Rename cannot 
> overwrite non empty destination directory 
> /tmp/hadoop-hadoop/mapred/local/1406915233073
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:149)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.(LocalJobRunner.java:163)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:731)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:432)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> java.security.AccessController.doPrivileged(Native Method)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> javax.security.auth.Subject.doAs(Subject.java:415)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:186)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:159)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:239)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.sqoop.manager.SqlManager.importQuery(SqlManager.java:645)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:415)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.sqoop.tool.ImportTool.run(ImportTool.java:502)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.sqoop.Sqoop.run(Sqoop.java:145)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.sqoop.Sqoop.main(Sqoop.java:238)
> {code}
> This happens if two are kicked off in the same millisecond. The issue is the 
> following lines of code in the org.apache.hadoop.mapred.LocalDistributedCacheManager class: 
> {code}
> // Generating unique numbers for FSDownload.
> AtomicLong uniqueNumberGenerator =
>new AtomicLong(System.currentTimeMillis());
> {code}
> and 
> {code}
> Long.toString(uniqueNumberGenerator.incrementAndGet())),
> {code}
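The race above can be demonstrated without Hadoop: each process seeds its own AtomicLong with System.currentTimeMillis(), so two processes started in the same millisecond generate the same "unique" directory name. The attachment name (jobid-plus-uuid) suggests a UUID-based suffix as one fix; the sketch below is illustrative, with hypothetical method names, not the patch itself:

```java
import java.util.UUID;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the collision: each "process" gets its own generator seeded
// with the same millisecond timestamp, so both produce identical names.
// Appending a per-process UUID keeps the names distinct.
public class UniqueDirDemo {
    // Mirrors the code quoted above: a fresh AtomicLong per process.
    static String collidingName(long startMillis) {
        AtomicLong uniqueNumberGenerator = new AtomicLong(startMillis);
        return Long.toString(uniqueNumberGenerator.incrementAndGet());
    }

    // One possible fix: a process-unique suffix removes the collision.
    static String uuidName(long startMillis) {
        return startMillis + "-" + UUID.randomUUID();
    }

    public static void main(String[] args) {
        long t = System.currentTimeMillis();
        // Two "processes" launched in the same millisecond:
        System.out.println(collidingName(t).equals(collidingName(t))); // true: collision
        System.out.println(uuidName(t).equals(uuidName(t)));           // false: unique
    }
}
```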






[jira] [Updated] (MAPREDUCE-6441) LocalDistributedCacheManager for concurrent sqoop processes fails to create unique directories

2016-06-27 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-6441:
--
Assignee: Ray Chiang

> LocalDistributedCacheManager for concurrent sqoop processes fails to create 
> unique directories
> --
>
> Key: MAPREDUCE-6441
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6441
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: William Watson
>Assignee: Ray Chiang
> Attachments: HADOOP-10924.02.patch, 
> HADOOP-10924.03.jobid-plus-uuid.patch, MAPREDUCE-6441.004.patch
>
>
> Kicking off many sqoop processes in different threads results in:
> {code}
> 2014-08-01 13:47:24 -0400:  INFO - 14/08/01 13:47:22 ERROR tool.ImportTool: 
> Encountered IOException running import job: java.io.IOException: 
> java.util.concurrent.ExecutionException: java.io.IOException: Rename cannot 
> overwrite non empty destination directory 
> /tmp/hadoop-hadoop/mapred/local/1406915233073
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:149)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.(LocalJobRunner.java:163)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:731)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:432)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> java.security.AccessController.doPrivileged(Native Method)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> javax.security.auth.Subject.doAs(Subject.java:415)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:186)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:159)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:239)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.sqoop.manager.SqlManager.importQuery(SqlManager.java:645)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:415)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.sqoop.tool.ImportTool.run(ImportTool.java:502)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.sqoop.Sqoop.run(Sqoop.java:145)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.sqoop.Sqoop.main(Sqoop.java:238)
> {code}
> This happens if two are kicked off in the same millisecond. The issue is the 
> following lines of code in the org.apache.hadoop.mapred.LocalDistributedCacheManager class: 
> {code}
> // Generating unique numbers for FSDownload.
> AtomicLong uniqueNumberGenerator =
>new AtomicLong(System.currentTimeMillis());
> {code}
> and 
> {code}
> Long.toString(uniqueNumberGenerator.incrementAndGet())),
> {code}






[jira] [Commented] (MAPREDUCE-6704) Container fail to launch for mapred application

2016-06-27 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15350922#comment-15350922
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6704:
---

Also, after changing default value of properties, could you update 
./hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml?

> Container fail to launch for mapred application
> ---
>
> Key: MAPREDUCE-6704
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6704
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Blocker
> Attachments: 0001-MAPREDUCE-6704.patch, 0001-YARN-5026.patch
>
>
> Containers fail to launch for mapred applications.
> As part of the launch script, the {{HADOOP_MAPRED_HOME}} default value is not set.
> After 
> https://github.com/apache/hadoop/commit/9d4d30243b0fc9630da51a2c17b543ef671d035c
> {{HADOOP_MAPRED_HOME}} cannot be obtained from {{builder.environment()}} 
> since {{DefaultContainerExecutor#buildCommandExecutor}} sets inherit to false.
> {noformat}
> 16/05/02 09:16:05 INFO mapreduce.Job: Job job_1462155939310_0004 failed with 
> state FAILED due to: Application application_1462155939310_0004 failed 2 
> times due to AM Container for appattempt_1462155939310_0004_02 exited 
> with  exitCode: 1
> Failing this attempt.Diagnostics: Exception from container-launch.
> Container id: container_1462155939310_0004_02_01
> Exit code: 1
> Stack trace: ExitCodeException exitCode=1:
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:946)
> at org.apache.hadoop.util.Shell.run(Shell.java:850)
> at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1144)
> at 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:227)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.launchContainer(ContainerLaunch.java:385)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:281)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:89)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Container exited with a non-zero exit code 1. Last 4096 bytes of stderr :
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option UseSplitVerifier; 
> support was removed in 8.0
> Error: Could not find or load main class 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster
> Container exited with a non-zero exit code 1. Last 4096 bytes of stderr :
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option UseSplitVerifier; 
> support was removed in 8.0
> {noformat}






[jira] [Commented] (MAPREDUCE-6704) Container fail to launch for mapred application

2016-06-27 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15350917#comment-15350917
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6704:
---

[~bibinchundatt] thanks for your contribution.

{quote}
+  "HADOOP_MAPRED_HOME=" + Apps.crossPlatformify("HADOOP_COMMON_HOME");
{quote}

Why not make it 
{{"HADOOP_MAPRED_HOME=" + Apps.crossPlatformify("HADOOP_MAPRED_HOME");}}? I think 
that is more straightforward and makes its behaviour easier to understand.





> Container fail to launch for mapred application
> ---
>
> Key: MAPREDUCE-6704
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6704
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Blocker
> Attachments: 0001-MAPREDUCE-6704.patch, 0001-YARN-5026.patch
>
>
> Containers fail to launch for mapred applications.
> As part of the launch script, the {{HADOOP_MAPRED_HOME}} default value is not set.
> After 
> https://github.com/apache/hadoop/commit/9d4d30243b0fc9630da51a2c17b543ef671d035c
> {{HADOOP_MAPRED_HOME}} cannot be obtained from {{builder.environment()}} 
> since {{DefaultContainerExecutor#buildCommandExecutor}} sets inherit to false.
> {noformat}
> 16/05/02 09:16:05 INFO mapreduce.Job: Job job_1462155939310_0004 failed with 
> state FAILED due to: Application application_1462155939310_0004 failed 2 
> times due to AM Container for appattempt_1462155939310_0004_02 exited 
> with  exitCode: 1
> Failing this attempt.Diagnostics: Exception from container-launch.
> Container id: container_1462155939310_0004_02_01
> Exit code: 1
> Stack trace: ExitCodeException exitCode=1:
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:946)
> at org.apache.hadoop.util.Shell.run(Shell.java:850)
> at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1144)
> at 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:227)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.launchContainer(ContainerLaunch.java:385)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:281)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:89)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Container exited with a non-zero exit code 1. Last 4096 bytes of stderr :
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option UseSplitVerifier; 
> support was removed in 8.0
> Error: Could not find or load main class 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster
> Container exited with a non-zero exit code 1. Last 4096 bytes of stderr :
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option UseSplitVerifier; 
> support was removed in 8.0
> {noformat}






[jira] [Updated] (MAPREDUCE-6704) Container fail to launch for mapred application

2016-06-27 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-6704:
--
Target Version/s: 3.0.0-alpha1

> Container fail to launch for mapred application
> ---
>
> Key: MAPREDUCE-6704
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6704
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Blocker
> Attachments: 0001-MAPREDUCE-6704.patch, 0001-YARN-5026.patch
>
>
> Containers fail to launch for mapred applications.
> As part of the launch script, the {{HADOOP_MAPRED_HOME}} default value is not set.
> After 
> https://github.com/apache/hadoop/commit/9d4d30243b0fc9630da51a2c17b543ef671d035c
> {{HADOOP_MAPRED_HOME}} cannot be obtained from {{builder.environment()}} 
> since {{DefaultContainerExecutor#buildCommandExecutor}} sets inherit to false.
> {noformat}
> 16/05/02 09:16:05 INFO mapreduce.Job: Job job_1462155939310_0004 failed with 
> state FAILED due to: Application application_1462155939310_0004 failed 2 
> times due to AM Container for appattempt_1462155939310_0004_02 exited 
> with  exitCode: 1
> Failing this attempt.Diagnostics: Exception from container-launch.
> Container id: container_1462155939310_0004_02_01
> Exit code: 1
> Stack trace: ExitCodeException exitCode=1:
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:946)
> at org.apache.hadoop.util.Shell.run(Shell.java:850)
> at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1144)
> at 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:227)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.launchContainer(ContainerLaunch.java:385)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:281)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:89)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Container exited with a non-zero exit code 1. Last 4096 bytes of stderr :
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option UseSplitVerifier; 
> support was removed in 8.0
> Error: Could not find or load main class 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster
> Container exited with a non-zero exit code 1. Last 4096 bytes of stderr :
> Java HotSpot(TM) 64-Bit Server VM warning: ignoring option UseSplitVerifier; 
> support was removed in 8.0
> {noformat}






[jira] [Updated] (MAPREDUCE-6721) mapreduce.reduce.shuffle.memory.limit.percent=0.0 should be legal to enforce shuffle to disk

2016-06-22 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-6721:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2.

Thanks [~jira.shegalov] for your contribution!

> mapreduce.reduce.shuffle.memory.limit.percent=0.0 should be legal to enforce 
> shuffle to disk
> 
>
> Key: MAPREDUCE-6721
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6721
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: mrv2, task
>Affects Versions: 2.6.4
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
> Fix For: 2.9.0
>
> Attachments: MAPREDUCE-6721.001.patch, MAPREDUCE-6721.002.patch
>
>
> We are potentially hitting an in-memory-shuffle-related reservation 
> starvation resembling MAPREDUCE-6445. To work around it, we wanted to disable 
> in-memory shuffle via mapreduce.reduce.shuffle.memory.limit.percent=0.0, which 
> turned out to be disallowed by the current logic. So we had to resort to 
> another small float value, such as 0.0001. However, zero is more logical, in my 
> opinion.
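The change discussed above amounts to widening the accepted range of the property so the boundary value 0.0 (shuffle everything to disk) becomes legal. The property name below is the real one; the two validation predicates are an illustrative sketch, not code copied from Hadoop's shuffle implementation:

```java
// Sketch of the configuration check discussed above. Before the fix,
// 0.0 was rejected and users had to fall back to a tiny value like 0.0001;
// after, 0.0 is a legal way to force every map output to disk.
public class ShuffleMemoryLimit {
    static final String KEY = "mapreduce.reduce.shuffle.memory.limit.percent";

    // Old-style check: requires a strictly positive fraction.
    static boolean strictCheck(float percent) {
        return percent > 0.0f && percent <= 1.0f;
    }

    // Relaxed check: 0.0 becomes legal.
    static boolean relaxedCheck(float percent) {
        return percent >= 0.0f && percent <= 1.0f;
    }

    public static void main(String[] args) {
        System.out.println(strictCheck(0.0f));  // false: the workaround was 0.0001
        System.out.println(relaxedCheck(0.0f)); // true: zero now enforces disk shuffle
    }
}
```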






[jira] [Comment Edited] (MAPREDUCE-6721) mapreduce.reduce.shuffle.memory.limit.percent=0.0 should be legal to enforce shuffle to disk

2016-06-22 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15345457#comment-15345457
 ] 

Tsuyoshi Ozawa edited comment on MAPREDUCE-6721 at 6/23/16 12:16 AM:
-

[~jira.shegalov] +1. It's a straightforward extension of the configuration. 
Checking this into trunk and branch-2.



was (Author: ozawa):
[~jira.shegalov] +1. It's a straight-forward extension of the configuration. 
Checking this in into trunk and branch-2.


> mapreduce.reduce.shuffle.memory.limit.percent=0.0 should be legal to enforce 
> shuffle to disk
> 
>
> Key: MAPREDUCE-6721
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6721
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: mrv2, task
>Affects Versions: 2.6.4
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
> Attachments: MAPREDUCE-6721.001.patch, MAPREDUCE-6721.002.patch
>
>
> We are potentially hitting an in-memory-shuffle-related reservation 
> starvation resembling MAPREDUCE-6445. To work around it, we wanted to disable 
> in-memory shuffle via mapreduce.reduce.shuffle.memory.limit.percent=0.0, which 
> turned out to be disallowed by the current logic. So we had to resort to 
> another small float value, such as 0.0001. However, zero is more logical, in my 
> opinion.






[jira] [Commented] (MAPREDUCE-6721) mapreduce.reduce.shuffle.memory.limit.percent=0.0 should be legal to enforce shuffle to disk

2016-06-22 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15345457#comment-15345457
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6721:
---

[~jira.shegalov] +1. It's a straightforward extension of the configuration. 
Checking this into trunk and branch-2.


> mapreduce.reduce.shuffle.memory.limit.percent=0.0 should be legal to enforce 
> shuffle to disk
> 
>
> Key: MAPREDUCE-6721
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6721
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: mrv2, task
>Affects Versions: 2.6.4
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
> Attachments: MAPREDUCE-6721.001.patch, MAPREDUCE-6721.002.patch
>
>
> We are potentially hitting an in-memory-shuffle-related reservation 
> starvation resembling MAPREDUCE-6445. To work around it, we wanted to disable 
> in-memory shuffle via mapreduce.reduce.shuffle.memory.limit.percent=0.0, which 
> turned out to be disallowed by the current logic. So we had to resort to 
> another small float value, such as 0.0001. However, zero is more logical, in my 
> opinion.






[jira] [Updated] (MAPREDUCE-6721) mapreduce.reduce.shuffle.memory.limit.percent=0.0 should be legal to enforce shuffle to disk

2016-06-22 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-6721:
--
Labels:   (was: incompatible)

> mapreduce.reduce.shuffle.memory.limit.percent=0.0 should be legal to enforce 
> shuffle to disk
> 
>
> Key: MAPREDUCE-6721
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6721
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: mrv2, task
>Affects Versions: 2.6.4
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
> Attachments: MAPREDUCE-6721.001.patch, MAPREDUCE-6721.002.patch
>
>
> We are potentially hitting an in-memory-shuffle-related reservation 
> starvation resembling MAPREDUCE-6445. To work around it, we wanted to disable 
> in-memory shuffle via mapreduce.reduce.shuffle.memory.limit.percent=0.0, which 
> turned out to be disallowed by the current logic. So we had to resort to 
> another small float value, such as 0.0001. However, zero is more logical, in my 
> opinion.






[jira] [Updated] (MAPREDUCE-6721) mapreduce.reduce.shuffle.memory.limit.percent=0.0 should be legal to enforce shuffle to disk

2016-06-22 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-6721:
--
Issue Type: Improvement  (was: Bug)

> mapreduce.reduce.shuffle.memory.limit.percent=0.0 should be legal to enforce 
> shuffle to disk
> 
>
> Key: MAPREDUCE-6721
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6721
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: mrv2, task
>Affects Versions: 2.6.4
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>  Labels: incompatible
> Attachments: MAPREDUCE-6721.001.patch, MAPREDUCE-6721.002.patch
>
>
> We are potentially hitting an in-memory-shuffle-related reservation 
> starvation resembling MAPREDUCE-6445. To work around it, we wanted to disable 
> in-memory shuffle via mapreduce.reduce.shuffle.memory.limit.percent=0.0, which 
> turned out to be disallowed by the current logic. So we had to resort to 
> another small float value, such as 0.0001. However, zero is more logical, in my 
> opinion.






[jira] [Updated] (MAPREDUCE-6721) mapreduce.reduce.shuffle.memory.limit.percent=0.0 should be legal to enforce shuffle to disk

2016-06-22 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-6721:
--
Labels: incompatible  (was: )

> mapreduce.reduce.shuffle.memory.limit.percent=0.0 should be legal to enforce 
> shuffle to disk
> 
>
> Key: MAPREDUCE-6721
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6721
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: mrv2, task
>Affects Versions: 2.6.4
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>  Labels: incompatible
> Attachments: MAPREDUCE-6721.001.patch, MAPREDUCE-6721.002.patch
>
>
> We are potentially hitting an in-memory-shuffle-related reservation 
> starvation resembling MAPREDUCE-6445. To work around it, we wanted to disable 
> in-memory shuffle via mapreduce.reduce.shuffle.memory.limit.percent=0.0, which 
> turned out to be disallowed by the current logic. So we had to resort to 
> another small float value, such as 0.0001. However, zero is more logical, in my 
> opinion.






[jira] [Commented] (MAPREDUCE-4522) DBOutputFormat Times out on large batch inserts

2016-03-05 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15182035#comment-15182035
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-4522:
---

[~shyam_gav] Thank you for the question. That was my mistake; it should read: "Please 
add the configuration, MR_DBOUTPUTFORMAT_BATCH_SIZE, to the DBOutputFormat class 
instead of MRJobConfig". 

> DBOutputFormat Times out on large batch inserts
> ---
>
> Key: MAPREDUCE-4522
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4522
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: task-controller
>Affects Versions: 0.20.205.0
>Reporter: Nathan Jarus
>Assignee: Shyam Gavulla
>  Labels: newbie
>
> In DBRecordWriter#close(), progress is never updated. In large batch inserts, 
> this can cause the reduce task to time out due to the amount of time it takes 
> the SQL engine to process that insert. 
> Potential solutions I can see:
> Don't batch inserts; do the insert when DBRecordWriter#write() is called 
> (awful)
> Spin up a thread in DBRecordWriter#close() and update progress in that. 
> (gross)
> I can provide code for either if you're interested. 
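The second option above (updating progress from a helper thread while close() runs) can be sketched roughly as below. ProgressReporter is a hypothetical stand-in for the task's real progress callback, and the Runnable stands in for the blocking statement.executeBatch() call; none of these names come from the actual DBOutputFormat API:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the "keep-alive thread" workaround: while a long batch insert
// runs in close(), a daemon thread periodically reports progress so the
// reduce task is not killed for inactivity.
public class BatchCloseWithProgress {

    interface ProgressReporter {      // hypothetical stand-in for the umbilical
        void progress();
    }

    static void closeWithKeepAlive(Runnable batchInsert, ProgressReporter reporter,
                                   long intervalMillis) throws InterruptedException {
        Thread keepAlive = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    reporter.progress();        // keeps the task from timing out
                    Thread.sleep(intervalMillis);
                }
            } catch (InterruptedException ignored) {
                // expected when the insert finishes and we are interrupted
            }
        });
        keepAlive.setDaemon(true);
        keepAlive.start();
        try {
            batchInsert.run();                  // e.g. statement.executeBatch()
        } finally {
            keepAlive.interrupt();
            keepAlive.join();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicInteger pings = new AtomicInteger();
        // Simulate a slow batch insert (300 ms) with a 50 ms progress interval.
        closeWithKeepAlive(
            () -> { try { Thread.sleep(300); } catch (InterruptedException e) { } },
            pings::incrementAndGet,
            50);
        System.out.println("progress pings sent: " + pings.get());
    }
}
```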





[jira] [Commented] (MAPREDUCE-4522) DBOutputFormat Times out on large batch inserts

2016-03-03 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179395#comment-15179395
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-4522:
---

In addition to the above comment, MR_DBOUTPUTFORMAT_BATCH_SIZE should be 
renamed to MR_DB_OUTPUT_FORMAT_BATCH_SIZE.






[jira] [Updated] (MAPREDUCE-4522) DBOutputFormat Times out on large batch inserts

2016-03-03 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-4522:
--
Assignee: Shyam Gavulla  (was: Nathan Jarus)






[jira] [Commented] (MAPREDUCE-4522) DBOutputFormat Times out on large batch inserts

2016-03-03 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15179088#comment-15179088
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-4522:
---

[~shyam_gav] thank you for the update. MR_DBOUTPUTFORMAT_BATCH_SIZE is a 
DBInputFormat-specific configuration. Please add the configuration to 
MR_DBOUTPUTFORMAT_BATCH_SIZE instead of MRJobConfig. After addressing this 
comment, could you attach the patch to the JIRA?






[jira] [Updated] (MAPREDUCE-4522) DBOutputFormat Times out on large batch inserts

2016-03-03 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-4522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-4522:
--
Assignee: Nathan Jarus






[jira] [Commented] (MAPREDUCE-4522) DBOutputFormat Times out on large batch inserts

2016-02-28 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-4522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15171436#comment-15171436
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-4522:
---

[~shyam_gav] sounds reasonable. I'm thinking of making the batch size 
configurable. Could you tackle this issue?






[jira] [Updated] (MAPREDUCE-6641) TestTaskAttempt fails in trunk

2016-02-20 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-6641:
--
Attachment: 
org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt-output.txt

Attaching a log.

> TestTaskAttempt fails in trunk
> --
>
> Key: MAPREDUCE-6641
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6641
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: test
>Reporter: Tsuyoshi Ozawa
> Attachments: 
> org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt-output.txt
>
>
> {code}
> Running org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt
> Tests run: 23, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 24.917 sec 
> <<< FAILURE! - in org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt
> testMRAppHistoryForTAFailedInAssigned(org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt)
>   Time elapsed: 12.732 sec  <<< FAILURE!
> java.lang.AssertionError: No Ta Started JH Event
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt.testTaskAttemptAssignedKilledHistory(TestTaskAttempt.java:388)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt.testMRAppHistoryForTAFailedInAssigned(TestTaskAttempt.java:177)
> {code}





[jira] [Created] (MAPREDUCE-6641) TestTaskAttempt fails in trunk

2016-02-20 Thread Tsuyoshi Ozawa (JIRA)
Tsuyoshi Ozawa created MAPREDUCE-6641:
-

 Summary: TestTaskAttempt fails in trunk
 Key: MAPREDUCE-6641
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6641
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: test
Reporter: Tsuyoshi Ozawa


{code}
Running org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt
Tests run: 23, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 24.917 sec <<< 
FAILURE! - in org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt
testMRAppHistoryForTAFailedInAssigned(org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt)
  Time elapsed: 12.732 sec  <<< FAILURE!
java.lang.AssertionError: No Ta Started JH Event
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt.testTaskAttemptAssignedKilledHistory(TestTaskAttempt.java:388)
at 
org.apache.hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt.testMRAppHistoryForTAFailedInAssigned(TestTaskAttempt.java:177)
{code}





[jira] [Resolved] (MAPREDUCE-6341) Fix typo in mapreduce tutorial

2016-02-16 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa resolved MAPREDUCE-6341.
---
Resolution: Fixed

Committed addendum patch to trunk, branch-2, branch-2.8. Thanks [~jmluy] for 
your contribution and thanks [~vinodkv] for pinging.

> Fix typo in mapreduce tutorial
> --
>
> Key: MAPREDUCE-6341
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6341
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: John Michael Luy
>Assignee: John Michael Luy
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HADOOP-11879.patch, MAPREDUCE-6341.patch
>
>
> There are some typos in the converted tutorial in markdown.





[jira] [Commented] (MAPREDUCE-6341) Fix typo in mapreduce tutorial

2016-02-16 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15149876#comment-15149876
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6341:
---

+1, checking this in.






[jira] [Commented] (MAPREDUCE-6341) Fix typo in mapreduce tutorial

2016-02-16 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15149340#comment-15149340
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6341:
---

[~vinodkv] sure, thank you for pinging me.






[jira] [Commented] (MAPREDUCE-6607) .staging dir is not cleaned up if mapreduce.task.files.preserve.failedtask or mapreduce.task.files.preserve.filepattern are set

2016-02-14 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15146942#comment-15146942
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6607:
---

[~lewuathe] thank you for the update. How about adding tests for the case when 
both PRESERVE_FAILED_TASK_FILES and PRESERVE_FILES_PATTERN are set?

> .staging dir is not cleaned up if mapreduce.task.files.preserve.failedtask or 
> mapreduce.task.files.preserve.filepattern are set
> ---
>
> Key: MAPREDUCE-6607
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6607
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: applicationmaster
>Affects Versions: 2.7.1
>Reporter: Maysam Yabandeh
>Assignee: Kai Sasaki
>Priority: Minor
> Attachments: MAPREDUCE-6607.01.patch, MAPREDUCE-6607.02.patch
>
>
> If either of the following configs is set, the .staging dir is not cleaned 
> up:
> * mapreduce.task.files.preserve.failedtask
> * mapreduce.task.files.preserve.filepattern
> The former was supposed to keep the .staging dir only for failed tasks, and 
> the latter was supposed to apply only if the task name matches the specified 
> regular expression.
> {code}
>   protected boolean keepJobFiles(JobConf conf) {
> return (conf.getKeepTaskFilesPattern() != null || conf
> .getKeepFailedTaskFiles());
>   }
> {code}
> {code}
>   public void cleanupStagingDir() throws IOException {
> /* make sure we clean the staging files */
> String jobTempDir = null;
> FileSystem fs = getFileSystem(getConfig());
> try {
>   if (!keepJobFiles(new JobConf(getConfig()))) {
> jobTempDir = getConfig().get(MRJobConfig.MAPREDUCE_JOB_DIR);
> if (jobTempDir == null) {
>   LOG.warn("Job Staging directory is null");
>   return;
> }
> Path jobTempDirPath = new Path(jobTempDir);
> LOG.info("Deleting staging directory " + 
> FileSystem.getDefaultUri(getConfig()) +
> " " + jobTempDir);
> fs.delete(jobTempDirPath, true);
>   }
> } catch(IOException io) {
>   LOG.error("Failed to cleanup staging dir " + jobTempDir, io);
> }
>   }
> {code}
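For illustration, here is a minimal sketch of a cleanup policy that honors the stated intent. The signature is invented: jobFailed and taskName are hypothetical stand-ins for state the real MRAppMaster would consult, so this is a sketch of the idea rather than a patch:

```java
import java.util.regex.Pattern;

// Sketch only -- not the real MRAppMaster code. Instead of keeping the
// .staging dir whenever either preserve config is merely set, the policy
// consults the job outcome and matches the task name against the pattern.
public class StagingCleanupPolicy {

    static boolean keepJobFiles(boolean preserveFailed, String keepPattern,
                                boolean jobFailed, String taskName) {
        if (preserveFailed && jobFailed) {
            return true;   // keep .staging only when the job actually failed
        }
        if (keepPattern != null && Pattern.matches(keepPattern, taskName)) {
            return true;   // keep only when the task name matches the pattern
        }
        return false;      // otherwise the .staging dir should be cleaned up
    }

    public static void main(String[] args) {
        // Successful job with preserve.failedtask set: cleanup should proceed.
        System.out.println(keepJobFiles(true, null, false, "attempt_1_m_0")); // false
    }
}
```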





[jira] [Commented] (MAPREDUCE-6607) .staging dir is not cleaned up if mapreduce.task.files.preserve.failedtask or mapreduce.task.files.preserve.filepattern are set

2016-02-10 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140678#comment-15140678
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6607:
---

[~lewuathe] the patch seems to be stale, so could you rebase it on trunk?






[jira] [Commented] (MAPREDUCE-6607) .staging dir is not cleaned up if mapreduce.task.files.preserve.failedtask or mapreduce.task.files.preserve.filepattern are set

2016-02-09 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15139785#comment-15139785
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6607:
---

OK, I'll check this issue today.






[jira] [Updated] (MAPREDUCE-6555) TestMRAppMaster fails on trunk

2015-11-25 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-6555:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk. Thanks [~djp] for the contribution and [~ajisakaa] and 
[~varun_saxena] for the review and comments.

> TestMRAppMaster fails on trunk
> --
>
> Key: MAPREDUCE-6555
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6555
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Junping Du
> Fix For: 3.0.0
>
> Attachments: MAPREDUCE-6555.patch
>
>
> Observed in QA report of YARN-3840 
> {noformat}
> Running org.apache.hadoop.mapreduce.v2.app.TestMRAppMaster
> Tests run: 9, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 20.699 sec 
> <<< FAILURE! - in org.apache.hadoop.mapreduce.v2.app.TestMRAppMaster
> testMRAppMasterMidLock(org.apache.hadoop.mapreduce.v2.app.TestMRAppMaster)  
> Time elapsed: 0.474 sec  <<< FAILURE!
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.mapreduce.v2.app.TestMRAppMaster.testMRAppMasterMidLock(TestMRAppMaster.java:174)
> testMRAppMasterSuccessLock(org.apache.hadoop.mapreduce.v2.app.TestMRAppMaster)
>   Time elapsed: 0.175 sec  <<< ERROR!
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.io.FileNotFoundException: File 
> file:/home/varun/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/staging/history/done_intermediate/TestAppMasterUser/job_1317529182569_0004-1448100479292-TestAppMasterUser-%3Cmissing+job+name%3E-1448100479413-0-0-SUCCEEDED-default-1448100479292.jhist_tmp
>  does not exist
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:640)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:866)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:630)
>   at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:340)
>   at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:292)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.rename(RawLocalFileSystem.java:372)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.rename(ChecksumFileSystem.java:513)
>   at 
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.moveTmpToDone(JobHistoryEventHandler.java:1346)
>   at 
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.processDoneFiles(JobHistoryEventHandler.java:1154)
>   at 
> org.apache.hadoop.service.ServiceOperations.stop(ServiceOperations.java:52)
>   at 
> org.apache.hadoop.service.ServiceOperations.stopQuietly(ServiceOperations.java:80)
>   at 
> org.apache.hadoop.service.CompositeService.stop(CompositeService.java:157)
>   at 
> org.apache.hadoop.service.CompositeService.serviceStop(CompositeService.java:131)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStop(MRAppMaster.java:1751)
>   at 
> org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.stop(MRAppMaster.java:1247)
>   at 
> org.apache.hadoop.mapreduce.v2.app.TestMRAppMaster.testMRAppMasterSuccessLock(TestMRAppMaster.java:254)
> {noformat}





[jira] [Commented] (MAPREDUCE-6555) TestMRAppMaster fails on trunk

2015-11-25 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15026987#comment-15026987
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6555:
---

Jenkins CI seems to be checking another patch. +1 for MAPREDUCE-6555.patch. 
Checking this in. 






[jira] [Commented] (MAPREDUCE-6555) TestMRAppMaster fails on trunk

2015-11-25 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15026977#comment-15026977
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6555:
---

[~djp] yeah. However, we have lots of intermittent test failures, so I've 
checked it. +1. 






[jira] [Commented] (MAPREDUCE-6555) TestMRAppMaster fails on trunk

2015-11-24 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15026252#comment-15026252
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6555:
---

I'm checking by running the test case. Please wait a moment.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6558) multibyte delimiters with compressed input files generate duplicate records

2015-11-24 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-6558:
--
Target Version/s: 2.8.0, 2.6.3, 2.7.3

> multibyte delimiters with compressed input files generate duplicate records
> ---
>
> Key: MAPREDUCE-6558
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6558
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv1, mrv2
>Affects Versions: 2.7.2
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>
> This is the follow up for MAPREDUCE-6549. Compressed files cause record 
> duplications as shown in different junit tests. The number of duplicated 
> records changes with the splitsize:
> Unexpected number of records in split (splitsize = 10)
> Expected: 41051
> Actual: 45062
> Unexpected number of records in split (splitsize = 10)
> Expected: 41051
> Actual: 41052
> Test passes with splitsize = 147445, which is the compressed file length. The 
> file is a bzip2 file with 100k blocks and a total of 11 blocks.
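The invariant at stake can be sketched in plain Java, independent of Hadoop: every record start must be attributed to exactly one split, even when the delimiter spans multiple bytes. `countRecords` and the `|+|` delimiter below are illustrative stand-ins for the record reader's logic, not the actual implementation.

```java
import java.nio.charset.StandardCharsets;

public class SplitScan {
    // Count records whose *start offset* falls inside [start, end).
    // This mirrors the contract a record reader must honor: summing the
    // counts over adjacent splits must equal the true record count,
    // with no duplicates at split boundaries.
    static int countRecords(byte[] data, String delim, int start, int end) {
        String text = new String(data, StandardCharsets.UTF_8);
        int count = 0;
        int pos = 0;
        while (pos < text.length()) {
            int recStart = pos;
            int next = text.indexOf(delim, pos);
            pos = (next < 0) ? text.length() : next + delim.length();
            if (recStart >= start && recStart < end) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        byte[] data = "aaa|+|bbb|+|ccc|+|ddd".getBytes(StandardCharsets.UTF_8);
        int total = 0;
        // Sum record counts over adjacent splits of size 10; the property
        // under test in MAPREDUCE-6558 is that this sum equals the true
        // number of records regardless of where the split boundaries fall.
        for (int s = 0; s < data.length; s += 10) {
            total += countRecords(data, "|+|", s, Math.min(s + 10, data.length));
        }
        System.out.println(total); // prints 4
    }
}
```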





[jira] [Commented] (MAPREDUCE-6555) TestMRAppMaster fails on trunk

2015-11-23 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15023789#comment-15023789
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6555:
---

The log can be seen here: 
https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2649/testReport/

About 
org.apache.hadoop.mapreduce.v2.app.TestMRAppMaster.testMRAppMasterSuccessLock:
{code}
java.io.FileNotFoundException: File 
file:/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/staging/history/done_intermediate/TestAppMasterUser/job_1317529182569_0004_conf.xml_tmp
 does not exist

Stack trace
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
java.io.FileNotFoundException: File 
file:/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/staging/history/done_intermediate/TestAppMasterUser/job_1317529182569_0004_conf.xml_tmp
 does not exist
at 
org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:640)
at 
org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:866)
at 
org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:630)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:340)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:292)
at 
org.apache.hadoop.fs.RawLocalFileSystem.rename(RawLocalFileSystem.java:372)
at 
org.apache.hadoop.fs.ChecksumFileSystem.rename(ChecksumFileSystem.java:513)
at 
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.moveTmpToDone(JobHistoryEventHandler.java:1346)
at 
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.processDoneFiles(JobHistoryEventHandler.java:1153)
at 
org.apache.hadoop.service.ServiceOperations.stop(ServiceOperations.java:52)
at 
org.apache.hadoop.service.ServiceOperations.stopQuietly(ServiceOperations.java:80)
at 
org.apache.hadoop.service.CompositeService.stop(CompositeService.java:157)
at 
org.apache.hadoop.service.CompositeService.serviceStop(CompositeService.java:131)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStop(MRAppMaster.java:1751)
at 
org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.stop(MRAppMaster.java:1247)
at 
org.apache.hadoop.mapreduce.v2.app.TestMRAppMaster.testMRAppMasterSuccessLock(TestMRAppMaster.java:254)
{code}

About org.apache.hadoop.mapreduce.v2.app.TestMRAppMaster.testMRAppMasterMidLock:
{code}
java.io.FileNotFoundException: File 
file:/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/staging/history/done_intermediate/TestAppMasterUser/job_1317529182569_0004_conf.xml_tmp
 does not exist

Stack trace
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
java.io.FileNotFoundException: File 
file:/home/jenkins/jenkins-slave/workspace/Hadoop-Mapreduce-trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/staging/history/done_intermediate/TestAppMasterUser/job_1317529182569_0004_conf.xml_tmp
 does not exist
at 
org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:640)
at 
org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:866)
at 
org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:630)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:340)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:292)
at 
org.apache.hadoop.fs.RawLocalFileSystem.rename(RawLocalFileSystem.java:372)
at 
org.apache.hadoop.fs.ChecksumFileSystem.rename(ChecksumFileSystem.java:513)
at 
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.moveTmpToDone(JobHistoryEventHandler.java:1346)
at 
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.processDoneFiles(JobHistoryEventHandler.java:1153)
at 
org.apache.hadoop.service.ServiceOperations.stop(ServiceOperations.java:52)
at 
org.apache.hadoop.service.ServiceOperations.stopQuietly(ServiceOperations.java:80)
at 
org.apache.hadoop.service.CompositeService.stop(CompositeService.java:157)
at 
org.apache.hadoop.service.CompositeService.serviceStop(CompositeService.java:131)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStop(MRAppMaster.java:1751)
at 
org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
at 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.stop(MRAppMaster.java:1247)
at 

[jira] [Commented] (MAPREDUCE-6526) Remove usage of metrics v1 from hadoop-mapreduce

2015-11-11 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15000600#comment-15000600
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6526:
---

Sounds good.

> Remove usage of metrics v1 from hadoop-mapreduce
> 
>
> Key: MAPREDUCE-6526
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6526
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Blocker
>
> LocalJobRunnerMetrics and ShuffleClientMetrics are still using metrics v1. We 
> should remove these metrics or rewrite them to use metrics v2.





[jira] [Updated] (MAPREDUCE-6505) Migrate io test cases

2015-11-11 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-6505:
--
Issue Type: Bug  (was: Sub-task)
Parent: (was: MAPREDUCE-6050)

> Migrate io test cases
> -
>
> Key: MAPREDUCE-6505
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6505
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: test
>Reporter: Dustin Cote
>Assignee: Dustin Cote
>Priority: Trivial
> Attachments: MAPREDUCE-6505-1.patch, MAPREDUCE-6505-2.patch, 
> MAPREDUCE-6505-3.patch, MAPREDUCE-6505-4.patch
>
>
> Migrating just the io test cases 





[jira] [Commented] (MAPREDUCE-6505) Migrate io test cases

2015-11-10 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1564#comment-1564
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6505:
---

[~cote] Sure, please wait a moment.

> Migrate io test cases
> -
>
> Key: MAPREDUCE-6505
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6505
> Project: Hadoop Map/Reduce
>  Issue Type: Sub-task
>  Components: test
>Reporter: Dustin Cote
>Assignee: Dustin Cote
>Priority: Trivial
> Attachments: MAPREDUCE-6505-1.patch, MAPREDUCE-6505-2.patch, 
> MAPREDUCE-6505-3.patch
>
>
> Migrating just the io test cases 





[jira] [Commented] (MAPREDUCE-6505) Migrate io test cases

2015-11-10 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1582#comment-1582
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6505:
---

[~cote] thank you for your patch; I glanced over it. Could you import the 
static methods individually instead of using a wildcard import (in 
TestArrayFile)? 


{code}
+import static org.junit.Assert.*;
{code}
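For illustration, the requested change looks like the sketch below. `java.lang.Math` stands in for `org.junit.Assert` here only so that the example compiles without JUnit on the classpath; the principle is the same.

```java
// Before: a wildcard static import pulls every static member into scope.
// import static java.lang.Math.*;

// After: explicit static imports document exactly which members are used.
import static java.lang.Math.max;
import static java.lang.Math.min;

public class ImportStyle {
    public static void main(String[] args) {
        // Same call sites compile either way; only the imports change.
        System.out.println(max(3, min(7, 5))); // prints 5
    }
}
```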

> Migrate io test cases
> -
>
> Key: MAPREDUCE-6505
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6505
> Project: Hadoop Map/Reduce
>  Issue Type: Sub-task
>  Components: test
>Reporter: Dustin Cote
>Assignee: Dustin Cote
>Priority: Trivial
> Attachments: MAPREDUCE-6505-1.patch, MAPREDUCE-6505-2.patch, 
> MAPREDUCE-6505-3.patch
>
>
> Migrating just the io test cases 





[jira] [Commented] (MAPREDUCE-5763) Warn message about httpshuffle in NM logs

2015-11-08 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14996082#comment-14996082
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-5763:
---

+1, checking this in.

> Warn message about httpshuffle in NM logs
> -
>
> Key: MAPREDUCE-5763
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5763
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Sandy Ryza
>Assignee: Akira AJISAKA
> Attachments: MAPREDUCE-5763.00.patch
>
>
> {code}
> 2014-02-20 12:08:45,141 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: The 
> Auxilurary Service named 'mapreduce_shuffle' in the configuration is for 
> class class org.apache.hadoop.mapred.ShuffleHandler which has a name of 
> 'httpshuffle'. Because these are not the same tools trying to send 
> ServiceData and read Service Meta Data may have issues unless the refer to 
> the name in the config.
> 2014-02-20 12:08:45,142 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: 
> Adding auxiliary service httpshuffle, "mapreduce_shuffle"
> {code}
> I'm seeing this in my NodeManager logs,  even though things work fine.  A 
> WARN is being caused by some sort of mismatch between the name of the service 
> (in terms of org.apache.hadoop.service.Service.getName()) and the name of the 
> auxiliary service.





[jira] [Updated] (MAPREDUCE-5763) Warn message about httpshuffle in NM logs

2015-11-08 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-5763:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   3.0.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~ajisakaa] for the contribution.

I think changing a log format in branch-2.7 could be a bit of a surprise for 
users (especially for sysadmins). What do you think?

> Warn message about httpshuffle in NM logs
> -
>
> Key: MAPREDUCE-5763
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5763
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Sandy Ryza
>Assignee: Akira AJISAKA
> Fix For: 3.0.0, 2.8.0
>
> Attachments: MAPREDUCE-5763.00.patch
>
>
> {code}
> 2014-02-20 12:08:45,141 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: The 
> Auxilurary Service named 'mapreduce_shuffle' in the configuration is for 
> class class org.apache.hadoop.mapred.ShuffleHandler which has a name of 
> 'httpshuffle'. Because these are not the same tools trying to send 
> ServiceData and read Service Meta Data may have issues unless the refer to 
> the name in the config.
> 2014-02-20 12:08:45,142 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: 
> Adding auxiliary service httpshuffle, "mapreduce_shuffle"
> {code}
> I'm seeing this in my NodeManager logs,  even though things work fine.  A 
> WARN is being caused by some sort of mismatch between the name of the service 
> (in terms of org.apache.hadoop.service.Service.getName()) and the name of the 
> auxiliary service.





[jira] [Updated] (MAPREDUCE-5763) Warn message about httpshuffle in NM logs

2015-11-08 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-5763:
--
Target Version/s: 3.0.0, 2.8.0  (was: 2.8.0, 2.7.3)

> Warn message about httpshuffle in NM logs
> -
>
> Key: MAPREDUCE-5763
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5763
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Sandy Ryza
>Assignee: Akira AJISAKA
> Fix For: 3.0.0, 2.8.0
>
> Attachments: MAPREDUCE-5763.00.patch
>
>
> {code}
> 2014-02-20 12:08:45,141 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: The 
> Auxilurary Service named 'mapreduce_shuffle' in the configuration is for 
> class class org.apache.hadoop.mapred.ShuffleHandler which has a name of 
> 'httpshuffle'. Because these are not the same tools trying to send 
> ServiceData and read Service Meta Data may have issues unless the refer to 
> the name in the config.
> 2014-02-20 12:08:45,142 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: 
> Adding auxiliary service httpshuffle, "mapreduce_shuffle"
> {code}
> I'm seeing this in my NodeManager logs,  even though things work fine.  A 
> WARN is being caused by some sort of mismatch between the name of the service 
> (in terms of org.apache.hadoop.service.Service.getName()) and the name of the 
> auxiliary service.





[jira] [Commented] (MAPREDUCE-5763) Warn message about httpshuffle in NM logs

2015-11-08 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14996128#comment-14996128
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-5763:
---

Thanks, Akira, for your reply. OK, removing 2.7.3 from the target versions.

> Warn message about httpshuffle in NM logs
> -
>
> Key: MAPREDUCE-5763
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5763
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Sandy Ryza
>Assignee: Akira AJISAKA
> Fix For: 3.0.0, 2.8.0
>
> Attachments: MAPREDUCE-5763.00.patch
>
>
> {code}
> 2014-02-20 12:08:45,141 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: The 
> Auxilurary Service named 'mapreduce_shuffle' in the configuration is for 
> class class org.apache.hadoop.mapred.ShuffleHandler which has a name of 
> 'httpshuffle'. Because these are not the same tools trying to send 
> ServiceData and read Service Meta Data may have issues unless the refer to 
> the name in the config.
> 2014-02-20 12:08:45,142 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: 
> Adding auxiliary service httpshuffle, "mapreduce_shuffle"
> {code}
> I'm seeing this in my NodeManager logs,  even though things work fine.  A 
> WARN is being caused by some sort of mismatch between the name of the service 
> (in terms of org.apache.hadoop.service.Service.getName()) and the name of the 
> auxiliary service.





[jira] [Commented] (MAPREDUCE-5889) Deprecate FileInputFormat.setInputPaths(Job, String) and FileInputFormat.addInputPaths(Job, String)

2015-11-01 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14984414#comment-14984414
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-5889:
---

Kicking Jenkins CI.

> Deprecate FileInputFormat.setInputPaths(Job, String) and 
> FileInputFormat.addInputPaths(Job, String)
> ---
>
> Key: MAPREDUCE-5889
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5889
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
>  Labels: BB2015-05-TBR, newbie
> Attachments: MAPREDUCE-5889.3.patch, MAPREDUCE-5889.patch, 
> MAPREDUCE-5889.patch
>
>
> {{FileInputFormat.setInputPaths(Job job, String commaSeparatedPaths)}} and 
> {{FileInputFormat.addInputPaths(Job job, String commaSeparatedPaths)}} fail 
> to parse commaSeparatedPaths if a comma is included in the file path. (e.g. 
> Path: {{/path/file,with,comma}})
> We should deprecate these methods and document to use {{setInputPaths(Job 
> job, Path... inputPaths)}} and {{addInputPaths(Job job, Path... inputPaths)}} 
> instead.
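The failure mode described above can be shown without Hadoop at all; `parseCommaSeparated` below is a hypothetical stand-in for the parsing the String overloads perform, not the actual implementation.

```java
import java.util.Arrays;

public class CommaPaths {
    // Naive comma-separated parsing, analogous to what
    // setInputPaths(Job, String) must do with its argument.
    static String[] parseCommaSeparated(String paths) {
        return paths.split(",");
    }

    public static void main(String[] args) {
        // A single path that happens to contain commas...
        String path = "/path/file,with,comma";
        // ...is wrongly split into three "paths".
        System.out.println(Arrays.toString(parseCommaSeparated(path)));
        // prints [/path/file, with, comma]
        // The varargs overload setInputPaths(Job, Path...) avoids the
        // ambiguity, because each Path arrives as a separate argument.
    }
}
```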





[jira] [Commented] (MAPREDUCE-6526) Remove usage of metrics v1 from hadoop-mapreduce

2015-10-28 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14978349#comment-14978349
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6526:
---

Do you think LocalJobRunnerMetrics and ShuffleClientMetrics are not useful 
metrics? IMHO, they are still useful for analysing performance, so I prefer 
rewriting them to use metrics v2. Compatibility itself is a second priority; 
however, third-party tools may consume the counter values, so I would at least 
prefer to keep the same counter names.

> Remove usage of metrics v1 from hadoop-mapreduce
> 
>
> Key: MAPREDUCE-6526
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6526
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Blocker
>
> LocalJobRunnerMetrics and ShuffleClientMetrics are still using metrics v1. We 
> should remove these metrics or rewrite them to use metrics v2.





[jira] [Commented] (MAPREDUCE-6525) Fix test failure of TestMiniMRClientCluster.testRestart

2015-10-28 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14977991#comment-14977991
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6525:
---

This is not directly related to this issue, but we should use Configuration to 
enable the timeline server instead of using the enableAHS flag.
{code}
  public MiniMRYarnCluster(String testName, int noOfNMs, boolean enableAHS) {
super(testName, 1, noOfNMs, 4, 4, enableAHS);
  }
{code}
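A minimal sketch of the configuration-driven alternative. `MiniConf` is a hypothetical stand-in for `org.apache.hadoop.conf.Configuration`, and `yarn.timeline-service.enabled` is assumed here to be the key governing the timeline server.

```java
import java.util.HashMap;
import java.util.Map;

public class MiniConf {
    private final Map<String, String> props = new HashMap<>();

    public void setBoolean(String key, boolean value) {
        props.put(key, Boolean.toString(value));
    }

    public boolean getBoolean(String key, boolean defaultValue) {
        String v = props.get(key);
        return v == null ? defaultValue : Boolean.parseBoolean(v);
    }

    public static void main(String[] args) {
        MiniConf conf = new MiniConf();
        conf.setBoolean("yarn.timeline-service.enabled", true);
        // The cluster reads the flag from its configuration rather than
        // taking an extra enableAHS constructor parameter, so all callers
        // control the feature the same way.
        boolean enableAHS = conf.getBoolean("yarn.timeline-service.enabled", false);
        System.out.println(enableAHS); // prints true
    }
}
```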


> Fix test failure of TestMiniMRClientCluster.testRestart
> ---
>
> Key: MAPREDUCE-6525
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6525
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: test
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: MAPREDUCE-6525.001.patch
>
>
> MiniMRYarnClusterAdapter#restart creates a new MiniMRYarnCluster with the 
> configuration of the existing MiniMRYarnCluster, but the address of the 
> HistoryServer is not properly set.





[jira] [Commented] (MAPREDUCE-6050) Upgrade JUnit3 TestCase to JUnit 4

2015-10-05 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14943117#comment-14943117
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6050:
---

[~cote] thank you for starting this jira. 

{quote}
I'd propose breaking this into at least 10 subtasks since a quick count of the 
number of files to modify is about 168 on trunk currently.
{quote}

That will help committers review your patches more easily. Looking forward to 
your contribution.

> Upgrade JUnit3 TestCase to JUnit 4
> --
>
> Key: MAPREDUCE-6050
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6050
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: test
>Reporter: Chen He
>Assignee: Yanjun Wang
>Priority: Trivial
>  Labels: newbie
> Attachments: MAPREDUCE-6050-1.patch
>
>
> There are still test classes that extend from junit.framework.TestCase. 
> upgrade them to JUnit4.





[jira] [Commented] (MAPREDUCE-6468) Consistent log severity level guards and statements in MapReduce project

2015-09-09 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14736364#comment-14736364
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6468:
---

[~jagadesh.kiran] Could you double check that the patch you uploaded is 
correct? It seems to be an old one.

{code}
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/BootstrapStandby.java
@@ -374,9 +374,7 @@ private boolean checkLogsAvailableForRead(FSImage image, 
long imageTxId,
   "or call saveNamespace on the active node.\n" +
   "Error: " + e.getLocalizedMessage();
   if (LOG.isDebugEnabled()) {
-LOG.fatal(msg, e);
-  } else {
-LOG.fatal(msg);
+LOG.debug("", e);
   }
   return false;
 }
{code}

Also, in BackupImage, the log level looks wrong. Please fix it.
{code}
 if (LOG.isDebugEnabled()) {
-  LOG.debug("State transition " + bnState + " -> " + newState);
+  LOG.trace("State transition " + bnState + " -> " + newState);
 }
{code}

> Consistent log severity level guards and statements in MapReduce project
> 
>
> Key: MAPREDUCE-6468
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6468
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Reporter: Jackie Chang
>Assignee: Jagadesh Kiran N
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-9995-00.patch, HADOOP-9995.patch, 
> MAPREDUCE-6468-01.patch, MAPREDUCE-6468-02.patch, MAPREDUCE-6468-03.patch
>
>
> Developers use logs to do in-house debugging. These log statements are later 
> demoted to less severe levels and usually are guarded by their matching 
> severity levels. However, we do see inconsistencies in trunk. A log statement 
> like 
> {code}
>if (LOG.isDebugEnabled()) {
> LOG.info("Assigned container (" + allocated + ") "
> {code}
> doesn't make much sense because the log message is actually only printed out 
> in DEBUG-level. We do see previous issues tried to correct this 
> inconsistency. I am proposing a comprehensive correction over trunk.
> Doug Cutting pointed it out in HADOOP-312: 
> https://issues.apache.org/jira/browse/HADOOP-312?focusedCommentId=12429498=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12429498
> HDFS-1611 also corrected this inconsistency.
> This could have been avoided by switching from log4j to slf4j's {} format 
> like CASSANDRA-625 (2010/3) and ZOOKEEPER-850 (2012/1), which gives cleaner 
> code and slightly higher performance.





[jira] [Commented] (MAPREDUCE-6468) Consistent log severity level guards and statements in MapReduce project

2015-09-09 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14736315#comment-14736315
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6468:
---

[~jagadesh.kiran] thank you for the update. In BootstrapStandby, the message 
should be logged instead of passing only the Exception object, since the 
message helps users understand what happened.
{code}
@@ -374,9 +374,7 @@ private boolean checkLogsAvailableForRead(FSImage image, 
long imageTxId,
   "or call saveNamespace on the active node.\n" +
   "Error: " + e.getLocalizedMessage();
   if (LOG.isDebugEnabled()) {
-LOG.fatal(msg, e);
-  } else {
-LOG.fatal(msg);
+LOG.debug("", e);
   }
{code}

Also, the log looks like a fatal one, so we shouldn't remove it. Could you 
update it as follows?
{code}
   if (LOG.isDebugEnabled()) {
LOG.log(msg, e); // this line should be fixed as LOG.debug
  } else {
LOG.fatal(msg);  // this line and else statement should remain here.
   }
{code}
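The general rule behind this review comment, that a level guard must match the level of the statement it guards, can be demonstrated with a self-contained example. It uses `java.util.logging` (where FINE roughly corresponds to debug) rather than the commons-logging API the Hadoop code uses.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LogGuards {
    public static void main(String[] args) {
        Logger log = Logger.getLogger(LogGuards.class.getName());
        log.setLevel(Level.INFO);

        // Inconsistent: the guard checks FINE (debug) but logs at INFO.
        // The message silently disappears whenever FINE is disabled, even
        // though INFO itself is enabled -- the bug class this JIRA targets.
        if (log.isLoggable(Level.FINE)) {
            log.info("assigned container ..."); // never reached here
        }

        // Consistent: the guard level matches the statement level.
        if (log.isLoggable(Level.INFO)) {
            log.info("assigned container ..."); // logged
        }
    }
}
```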



> Consistent log severity level guards and statements in MapReduce project
> 
>
> Key: MAPREDUCE-6468
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6468
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Reporter: Jackie Chang
>Assignee: Jagadesh Kiran N
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-9995-00.patch, HADOOP-9995.patch, 
> MAPREDUCE-6468-01.patch, MAPREDUCE-6468-02.patch
>
>
> Developers use logs to do in-house debugging. These log statements are later 
> demoted to less severe levels and usually are guarded by their matching 
> severity levels. However, we do see inconsistencies in trunk. A log statement 
> like 
> {code}
>if (LOG.isDebugEnabled()) {
> LOG.info("Assigned container (" + allocated + ") "
> {code}
> doesn't make much sense because the log message is actually only printed out 
> in DEBUG-level. We do see previous issues tried to correct this 
> inconsistency. I am proposing a comprehensive correction over trunk.
> Doug Cutting pointed it out in HADOOP-312: 
> https://issues.apache.org/jira/browse/HADOOP-312?focusedCommentId=12429498=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12429498
> HDFS-1611 also corrected this inconsistency.
> This could have been avoided by switching from log4j to slf4j's {} format 
> like CASSANDRA-625 (2010/3) and ZOOKEEPER-850 (2012/1), which gives cleaner 
> code and slightly higher performance.





[jira] [Commented] (MAPREDUCE-6468) Consistent log severity level guards and statements

2015-09-08 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734688#comment-14734688
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6468:
---

+1, checking this in.

> Consistent log severity level guards and statements 
> 
>
> Key: MAPREDUCE-6468
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6468
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Reporter: Jackie Chang
>Assignee: Jagadesh Kiran N
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-9995-00.patch, HADOOP-9995.patch, 
> MAPREDUCE-6468-01.patch
>
>
> Developers use logs to do in-house debugging. These log statements are later 
> demoted to less severe levels and usually are guarded by their matching 
> severity levels. However, we do see inconsistencies in trunk. A log statement 
> like 
> {code}
>if (LOG.isDebugEnabled()) {
> LOG.info("Assigned container (" + allocated + ") "
> {code}
> doesn't make much sense because the log message is actually only printed out 
> in DEBUG-level. We do see previous issues tried to correct this 
> inconsistency. I am proposing a comprehensive correction over trunk.
> Doug Cutting pointed it out in HADOOP-312: 
> https://issues.apache.org/jira/browse/HADOOP-312?focusedCommentId=12429498=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12429498
> HDFS-1611 also corrected this inconsistency.
> This could have been avoided by switching from log4j to slf4j's {} format 
> like CASSANDRA-625 (2010/3) and ZOOKEEPER-850 (2012/1), which gives cleaner 
> code and slightly higher performance.





[jira] [Commented] (MAPREDUCE-6468) Consistent log severity level guards and statements in RMContainerAllocator

2015-09-08 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14734697#comment-14734697
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6468:
---

[~jagadesh.kiran] Before checking this in, I reviewed the initial patch by 
Jackie. We should fix the same log-level inconsistency in other files as well.

1. RMContainerRequestor
{code}
if (LOG.isDebugEnabled()) {
  LOG.info("AFTER decResourceRequest:" + " applicationId="
  + applicationId.getId() + " priority=" + priority.getPriority()
  + " resourceName=" + resourceName + " numContainers="
  + remoteRequest.getNumContainers() + " #asks=" + ask.size());
}
{code}
2. LeafQueue
{code}
if (LOG.isDebugEnabled()) {
  LOG.info(getQueueName() + 
  " user=" + userName + 
  " used=" + queueUsage.getUsed() + " numContainers=" + numContainers +
  " headroom = " + application.getHeadroom() +
  " user-resources=" + user.getUsed()
  );
}
{code}
3. ContainerTokenSelector
{code}
for (Token token : tokens) {
  if (LOG.isDebugEnabled()) {
LOG.info("Looking for service: " + service + ". Current token is "
+ token);
  }
  if (ContainerTokenIdentifier.KIND.equals(token.getKind()) && 
  service.equals(token.getService())) {
return (Token) token;
  }
}
{code}
and so on. Could you check his patch ( 
https://issues.apache.org/jira/secure/attachment/12605179/HADOOP-9995.patch ) 
and update those as well?

> Consistent log severity level guards and statements in RMContainerAllocator
> ---
>
> Key: MAPREDUCE-6468
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6468
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Reporter: Jackie Chang
>Assignee: Jagadesh Kiran N
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-9995-00.patch, HADOOP-9995.patch, 
> MAPREDUCE-6468-01.patch
>
>
> Developers use logs to do in-house debugging. These log statements are later 
> demoted to less severe levels and usually are guarded by their matching 
> severity levels. However, we do see inconsistencies in trunk. A log statement 
> like 
> {code}
>if (LOG.isDebugEnabled()) {
> LOG.info("Assigned container (" + allocated + ") "
> {code}
> doesn't make much sense because the log message is actually only printed out 
> in DEBUG-level. We do see previous issues tried to correct this 
> inconsistency. I am proposing a comprehensive correction over trunk.
> Doug Cutting pointed it out in HADOOP-312: 
> https://issues.apache.org/jira/browse/HADOOP-312?focusedCommentId=12429498=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12429498
> HDFS-1611 also corrected this inconsistency.
> This could have been avoided by switching from log4j to slf4j's {} format 
> like CASSANDRA-625 (2010/3) and ZOOKEEPER-850 (2012/1), which gives cleaner 
> code and slightly higher performance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6468) Consistent log severity level guards and statements in RMContainerAllocator

2015-09-08 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-6468:
--
Summary: Consistent log severity level guards and statements in 
RMContainerAllocator  (was: Consistent log severity level guards and statements 
)

> Consistent log severity level guards and statements in RMContainerAllocator
> ---
>
> Key: MAPREDUCE-6468
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6468
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Reporter: Jackie Chang
>Assignee: Jagadesh Kiran N
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-9995-00.patch, HADOOP-9995.patch, 
> MAPREDUCE-6468-01.patch
>
>
> Developers use logs to do in-house debugging. These log statements are later 
> demoted to less severe levels and usually are guarded by their matching 
> severity levels. However, we do see inconsistencies in trunk. A log statement 
> like 
> {code}
>if (LOG.isDebugEnabled()) {
> LOG.info("Assigned container (" + allocated + ") "
> {code}
> doesn't make much sense, because the message is actually printed only at 
> DEBUG level. Previous issues have tried to correct this inconsistency. I am 
> proposing a comprehensive correction over trunk.
> Doug Cutting pointed it out in HADOOP-312: 
> https://issues.apache.org/jira/browse/HADOOP-312?focusedCommentId=12429498&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12429498
> HDFS-1611 also corrected this inconsistency.
> This could have been avoided by switching from log4j to slf4j's {} format 
> like CASSANDRA-625 (2010/3) and ZOOKEEPER-850 (2012/1), which gives cleaner 
> code and slightly higher performance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6468) Consistent log severity level guards and statements in MapReduce project

2015-09-08 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-6468:
--
Summary: Consistent log severity level guards and statements in MapReduce 
project  (was: Consistent log severity level guards and statements in 
RMContainerAllocator)

> Consistent log severity level guards and statements in MapReduce project
> 
>
> Key: MAPREDUCE-6468
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6468
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Reporter: Jackie Chang
>Assignee: Jagadesh Kiran N
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-9995-00.patch, HADOOP-9995.patch, 
> MAPREDUCE-6468-01.patch
>
>
> Developers use logs to do in-house debugging. These log statements are later 
> demoted to less severe levels and usually are guarded by their matching 
> severity levels. However, we do see inconsistencies in trunk. A log statement 
> like 
> {code}
>if (LOG.isDebugEnabled()) {
> LOG.info("Assigned container (" + allocated + ") "
> {code}
> doesn't make much sense, because the message is actually printed only at 
> DEBUG level. Previous issues have tried to correct this inconsistency. I am 
> proposing a comprehensive correction over trunk.
> Doug Cutting pointed it out in HADOOP-312: 
> https://issues.apache.org/jira/browse/HADOOP-312?focusedCommentId=12429498&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12429498
> HDFS-1611 also corrected this inconsistency.
> This could have been avoided by switching from log4j to slf4j's {} format 
> like CASSANDRA-625 (2010/3) and ZOOKEEPER-850 (2012/1), which gives cleaner 
> code and slightly higher performance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6442) Stack trace missing for client protocol provider creation error

2015-09-04 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14731710#comment-14731710
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6442:
---

[~lichangleo] Oh, I see. I confirmed that it works well too. +1, checking this 
in.

> Stack trace missing for client protocol provider creation error
> ---
>
> Key: MAPREDUCE-6442
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6442
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: client
>Reporter: Chang Li
>Assignee: Chang Li
> Attachments: MAPREDUCE-6442.2.patch, MAPREDUCE-6442.patch
>
>
> When provider creation fails, dump the stack trace rather than just printing 
> out the message.
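The fix described amounts to passing the exception object to the logger instead of only its message, so the stack trace reaches the log. A sketch with a minimal stub (`StubLog` is hypothetical, not the actual commons-logging API, though the two-argument `info(msg, throwable)` overload it mirrors is a common logging idiom):

```java
public class TraceDemo {
    // Hypothetical stub mirroring the common log.info(msg, throwable) overload.
    static class StubLog {
        String last;
        void info(String msg) { last = msg; }
        void info(String msg, Throwable t) {
            // Including the throwable is what gets the stack trace recorded.
            java.io.StringWriter sw = new java.io.StringWriter();
            t.printStackTrace(new java.io.PrintWriter(sw, true));
            last = msg + "\n" + sw;
        }
    }

    public static void main(String[] args) {
        StubLog log = new StubLog();
        Exception cause = new IllegalStateException("provider init failed");

        // Before: only the message string -- the trace is lost.
        log.info("Failed to create provider: " + cause.getMessage());
        boolean beforeHadTrace = log.last.contains("\tat ");

        // After: pass the throwable so the full stack trace is logged.
        log.info("Failed to create provider", cause);
        boolean afterHadTrace = log.last.contains("\tat ");

        System.out.println(beforeHadTrace + " -> " + afterHadTrace);
    }
}
```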



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6442) Stack trace is missing when error occurs in client protocol provider's constructor

2015-09-04 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-6442:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: 2.7.2
Target Version/s: 2.8.0, 2.7.2  (was: 2.8.0)
  Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-2, branch-2.7. Thanks [~lichangleo] for your 
contribution and reporting and thanks [~jlowe] for your review.

> Stack trace is missing when error occurs in client protocol provider's 
> constructor
> --
>
> Key: MAPREDUCE-6442
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6442
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: client
>Reporter: Chang Li
>Assignee: Chang Li
> Fix For: 2.7.2
>
> Attachments: MAPREDUCE-6442.2.patch, MAPREDUCE-6442.patch
>
>
> When provider creation fails, dump the stack trace rather than just printing 
> out the message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6442) Stack trace is missing when error occurs in client protocol provider's constructor

2015-09-04 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-6442:
--
Summary: Stack trace is missing when error occurs in client protocol 
provider's constructor  (was: Stack trace missing for client protocol provider 
creation error)

> Stack trace is missing when error occurs in client protocol provider's 
> constructor
> --
>
> Key: MAPREDUCE-6442
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6442
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: client
>Reporter: Chang Li
>Assignee: Chang Li
> Attachments: MAPREDUCE-6442.2.patch, MAPREDUCE-6442.patch
>
>
> When provider creation fails, dump the stack trace rather than just printing 
> out the message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6442) Stack trace missing for client protocol provider creation error

2015-09-03 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14728997#comment-14728997
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6442:
---

[~lichangleo] Thanks for your great work. How about adding a test for 
checking the error message? The test could be added at 
hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapred/TestCluster.java.

> Stack trace missing for client protocol provider creation error
> ---
>
> Key: MAPREDUCE-6442
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6442
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: client
>Reporter: Chang Li
>Assignee: Chang Li
> Attachments: MAPREDUCE-6442.2.patch, MAPREDUCE-6442.patch
>
>
> When provider creation fails, dump the stack trace rather than just printing 
> out the message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-5221) Reduce side Combiner is not used when using the new API

2015-09-01 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-5221:
--
Attachment: MAPREDUCE-5221.11.patch

Sorry for the delay. Refreshing a patch for trunk.

> Reduce side Combiner is not used when using the new API
> ---
>
> Key: MAPREDUCE-5221
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5221
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 2.0.4-alpha
>Reporter: Siddharth Seth
>Assignee: Tsuyoshi Ozawa
>  Labels: BB2015-05-TBR
> Attachments: MAPREDUCE-5221.1.patch, MAPREDUCE-5221.10.patch, 
> MAPREDUCE-5221.11.patch, MAPREDUCE-5221.2.patch, MAPREDUCE-5221.3.patch, 
> MAPREDUCE-5221.4.patch, MAPREDUCE-5221.5.patch, MAPREDUCE-5221.6.patch, 
> MAPREDUCE-5221.7-2.patch, MAPREDUCE-5221.7.patch, MAPREDUCE-5221.8.patch, 
> MAPREDUCE-5221.9.patch
>
>
> If a combiner is specified using o.a.h.mapreduce.Job.setCombinerClass, it 
> will be silently ignored on the reduce side, since the reduce-side code is 
> only aware of the old-API combiner.
> This doesn't fail the job - since the new combiner key does not deprecate the 
> old key.
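The failure mode described here, two configuration keys for the same feature where the reduce side consults only the old one, can be sketched generically. The key names below are illustrative placeholders, not the actual MapReduce property names:

```java
import java.util.HashMap;
import java.util.Map;

public class CombinerKeyDemo {
    // Illustrative placeholder keys -- not the real MapReduce property names.
    static final String OLD_KEY = "old.api.combiner.class";
    static final String NEW_KEY = "new.api.combiner.class";

    // Models the buggy reduce-side lookup: only the old key is consulted,
    // so a combiner set through the new API is silently dropped.
    static String reduceSideCombiner(Map<String, String> conf) {
        return conf.get(OLD_KEY);
    }

    // A fixed lookup falls back across both keys.
    static String fixedLookup(Map<String, String> conf) {
        String c = conf.get(NEW_KEY);
        return c != null ? c : conf.get(OLD_KEY);
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put(NEW_KEY, "WordCountCombiner");       // set via the new API only
        System.out.println(reduceSideCombiner(conf)); // null: combiner ignored
        System.out.println(fixedLookup(conf));        // WordCountCombiner
    }
}
```

Because the new key does not deprecate the old one, the job runs without error; the combiner is simply never applied on the reduce side.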



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6447) reduce shuffle throws java.lang.OutOfMemoryError: Java heap space

2015-08-12 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14693058#comment-14693058
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6447:
---

Hi guys, thank you for reporting this issue. Do you mean that we should fix 
the default value to avoid the exception in this JIRA?
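On the user side, the usual knobs for this kind of shuffle heap-space error are the reduce-side shuffle memory fractions. The property names below are the standard `mapreduce.reduce.shuffle.*` tuning keys as I understand them; verify the exact names and defaults against your release's mapred-default.xml. A sketch using a plain `Properties` stand-in for the job configuration:

```java
import java.util.Properties;

public class ShuffleTuningDemo {
    // Assumed property names -- check mapred-default.xml for your release.
    static Properties tunedShuffleConf() {
        Properties conf = new Properties();
        // Fraction of reducer heap available for in-memory map outputs.
        conf.setProperty("mapreduce.reduce.shuffle.input.buffer.percent", "0.50");
        // Cap on one map output held in memory (fraction of that buffer);
        // larger outputs are shuffled to disk instead of the heap.
        conf.setProperty("mapreduce.reduce.shuffle.memory.limit.percent", "0.15");
        // Fewer parallel fetchers also bounds peak in-memory shuffle usage.
        conf.setProperty("mapreduce.reduce.shuffle.parallelcopies", "5");
        return conf;
    }

    public static void main(String[] args) {
        tunedShuffleConf().forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```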

 reduce shuffle throws java.lang.OutOfMemoryError: Java heap space
 ---

 Key: MAPREDUCE-6447
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6447
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 2.5.0, 2.6.0, 2.5.1, 2.7.1
Reporter: shuzhangyao
Assignee: shuzhangyao
Priority: Minor

 2015-08-11 14:03:54,550 WARN [main] org.apache.hadoop.mapred.YarnChild: 
 Exception running child : 
 org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in 
 shuffle in fetcher#10
   at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
 Caused by: java.lang.OutOfMemoryError: Java heap space
   at 
 org.apache.hadoop.io.BoundedByteArrayOutputStream.init(BoundedByteArrayOutputStream.java:56)
   at 
 org.apache.hadoop.io.BoundedByteArrayOutputStream.init(BoundedByteArrayOutputStream.java:46)
   at 
 org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput.init(InMemoryMapOutput.java:63)
   at 
 org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.unconditionalReserve(MergeManagerImpl.java:303)
   at 
 org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl.reserve(MergeManagerImpl.java:293)
   at 
 org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput(Fetcher.java:511)
   at 
 org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:329)
   at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:193)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6038) A boolean may be set error in the Word Count v2.0 in MapReduce Tutorial

2015-07-08 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618455#comment-14618455
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6038:
---

Thanks for the commit, Chris.

 A boolean may be set error in the Word Count v2.0 in MapReduce Tutorial
 ---

 Key: MAPREDUCE-6038
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6038
 Project: Hadoop Map/Reduce
  Issue Type: Bug
 Environment: java version 1.8.0_11 hostspot 64-bit
Reporter: Pei Ma
Assignee: Tsuyoshi Ozawa
Priority: Minor
  Labels: BB2015-05-TBR
 Fix For: 2.8.0

 Attachments: MAPREDUCE-6038.1.patch, MAPREDUCE-6038.2.patch


 As a beginner, when I learned the basics of MR, I found that I couldn't run 
 the WordCount2 using the command bin/hadoop jar wc.jar WordCount2 
 /user/joe/wordcount/input /user/joe/wordcount/output from the Tutorial. The 
 VM threw a NullPointerException at line 47. At line 45, the default value 
 returned by conf.getBoolean is true. That is to say, when 
 wordcount.skip.patterns is not set, WordCount2 will still execute 
 getCacheFiles, so patternsURIs gets a null value. When the -skip option 
 doesn't exist, wordcount.skip.patterns will not be set, and a 
 NullPointerException comes out.
 In short, the block after the if-statement at line 45 shouldn't be executed 
 when the -skip option doesn't exist in the command. Maybe line 45 should 
 read if (conf.getBoolean("wordcount.skip.patterns", false)) { . Just change 
 the boolean default.
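The proposed fix, defaulting the flag to `false` so the skip-file branch only runs when `-skip` was actually given, can be sketched against a plain `Properties`-backed configuration (a stand-in for Hadoop's `Configuration`, whose `getBoolean(name, defaultValue)` it imitates):

```java
import java.util.Properties;

public class SkipFlagDemo {
    // Stand-in for Configuration.getBoolean(name, defaultValue).
    static boolean getBoolean(Properties conf, String name, boolean def) {
        String v = conf.getProperty(name);
        return v == null ? def : Boolean.parseBoolean(v);
    }

    // With a default of false, the skip-pattern branch is entered only
    // when the -skip option explicitly set the property.
    static boolean shouldLoadSkipPatterns(Properties conf) {
        return getBoolean(conf, "wordcount.skip.patterns", false);
    }

    public static void main(String[] args) {
        Properties noSkip = new Properties();  // -skip not given
        Properties withSkip = new Properties();
        withSkip.setProperty("wordcount.skip.patterns", "true");  // -skip given
        System.out.println(shouldLoadSkipPatterns(noSkip));   // false
        System.out.println(shouldLoadSkipPatterns(withSkip)); // true
    }
}
```

With the original default of `true`, the unset case would also enter the branch and call `getCacheFiles`, producing the null `patternsURIs` described above.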



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-5221) Reduce side Combiner is not used when using the new API

2015-07-02 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14612862#comment-14612862
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-5221:
---

[~davelatham] thank you for pinging. I'm refreshing a patch soon.

 Reduce side Combiner is not used when using the new API
 ---

 Key: MAPREDUCE-5221
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5221
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Affects Versions: 2.0.4-alpha
Reporter: Siddharth Seth
Assignee: Tsuyoshi Ozawa
  Labels: BB2015-05-TBR
 Attachments: MAPREDUCE-5221.1.patch, MAPREDUCE-5221.10.patch, 
 MAPREDUCE-5221.2.patch, MAPREDUCE-5221.3.patch, MAPREDUCE-5221.4.patch, 
 MAPREDUCE-5221.5.patch, MAPREDUCE-5221.6.patch, MAPREDUCE-5221.7-2.patch, 
 MAPREDUCE-5221.7.patch, MAPREDUCE-5221.8.patch, MAPREDUCE-5221.9.patch


 If a combiner is specified using o.a.h.mapreduce.Job.setCombinerClass, it 
 will be silently ignored on the reduce side, since the reduce-side code is 
 only aware of the old-API combiner.
 This doesn't fail the job - since the new combiner key does not deprecate the 
 old key.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6400) TestReduceFetchFromPartialMem fails

2015-06-15 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-6400:
--
Component/s: test

 TestReduceFetchFromPartialMem fails
 ---

 Key: MAPREDUCE-6400
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6400
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: test
Affects Versions: 2.8.0
Reporter: Akira AJISAKA

 TestReduceFetchFromPartialMem fails.
 {noformat}
 Running org.apache.hadoop.mapred.TestReduceFetchFromPartialMem
 Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 132.978 sec 
  FAILURE! - in org.apache.hadoop.mapred.TestReduceFetchFromPartialMem
 testReduceFromPartialMem(org.apache.hadoop.mapred.TestReduceFetchFromPartialMem)
   Time elapsed: 69.214 sec   ERROR!
 java.io.IOException: Job failed!
   at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:867)
   at 
 org.apache.hadoop.mapred.TestReduceFetchFromPartialMem.runJob(TestReduceFetchFromPartialMem.java:300)
   at 
 org.apache.hadoop.mapred.TestReduceFetchFromPartialMem.testReduceFromPartialMem(TestReduceFetchFromPartialMem.java:93)
 Results :
 Tests in error: 
   TestReduceFetchFromPartialMem.testReduceFromPartialMem:93-runJob:300 » IO 
 Job...
 Tests run: 1, Failures: 0, Errors: 1, Skipped: 0
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6388) Remove deprecation warnings from JobHistoryServer classes

2015-06-08 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14577933#comment-14577933
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6388:
---

+1, committing this shortly.

 Remove deprecation warnings from JobHistoryServer classes
 -

 Key: MAPREDUCE-6388
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6388
 Project: Hadoop Map/Reduce
  Issue Type: Task
  Components: jobhistoryserver
Affects Versions: 2.7.0
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Minor
  Labels: supportability
 Attachments: MAPREDUCE-6388.001.patch


 There are a ton of deprecation warnings in the JobHistoryServer classes.  
 This is affecting some modifications I'm making since a single line move 
 shifts all the deprecation warnings.  I'd like to get these fixed to prevent 
 minor changes from generating a ton of warnings in test-patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6388) Remove deprecation warnings from JobHistoryServer classes

2015-06-08 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-6388:
--
  Resolution: Fixed
   Fix Version/s: 2.8.0
Target Version/s: 2.8.0
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~rchiang] for your contribution!

 Remove deprecation warnings from JobHistoryServer classes
 -

 Key: MAPREDUCE-6388
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6388
 Project: Hadoop Map/Reduce
  Issue Type: Task
  Components: jobhistoryserver
Affects Versions: 2.7.0
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Minor
  Labels: supportability
 Fix For: 2.8.0

 Attachments: MAPREDUCE-6388.001.patch


 There are a ton of deprecation warnings in the JobHistoryServer classes.  
 This is affecting some modifications I'm making since a single line move 
 shifts all the deprecation warnings.  I'd like to get these fixed to prevent 
 minor changes from generating a ton of warnings in test-patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6204) TestJobCounters should use new properties instead of JobConf.MAPRED_TASK_JAVA_OPTS

2015-05-26 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558813#comment-14558813
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6204:
---

[~jira.shegalov] Yes, I agree with you. Let me clarify: I committed this 
change not to avoid an environment problem, but to update the test to use the 
new property instead of the deprecated one. Does this make sense? If this 
decision is wrong, I can revert the change; please ping me in that case.
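The substitution in question, replacing the deprecated catch-all key with the per-task-type keys, can be sketched with a `Properties` stand-in. The property names match my understanding of mapred-default.xml (`mapred.child.java.opts` behind `JobConf.MAPRED_TASK_JAVA_OPTS`, and the newer `mapreduce.map.java.opts` / `mapreduce.reduce.java.opts`); the fallback helper is illustrative:

```java
import java.util.Properties;

public class JavaOptsDemo {
    // Deprecated catch-all key (what JobConf.MAPRED_TASK_JAVA_OPTS points at).
    static final String OLD = "mapred.child.java.opts";
    // Per-task-type replacements the updated test should use instead.
    static final String MAP_OPTS = "mapreduce.map.java.opts";
    static final String REDUCE_OPTS = "mapreduce.reduce.java.opts";

    // Illustrative resolution: prefer the specific new key, fall back to old.
    static String mapJavaOpts(Properties conf) {
        return conf.getProperty(MAP_OPTS, conf.getProperty(OLD, ""));
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        conf.setProperty(OLD, "-Xmx200m");        // legacy setting
        conf.setProperty(MAP_OPTS, "-Xmx512m");   // new, task-type specific
        System.out.println(mapJavaOpts(conf));    // -Xmx512m: new key wins
    }
}
```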

 TestJobCounters should use new properties instead of 
 JobConf.MAPRED_TASK_JAVA_OPTS
 --

 Key: MAPREDUCE-6204
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6204
 Project: Hadoop Map/Reduce
  Issue Type: Test
  Components: test
Affects Versions: 2.6.0
Reporter: sam liu
Assignee: sam liu
Priority: Minor
  Labels: BB2015-05-RFC
 Fix For: 2.8.0

 Attachments: MAPREDUCE-6204-1.patch, MAPREDUCE-6204-2.patch, 
 MAPREDUCE-6204-3.patch, MAPREDUCE-6204-4.patch, MAPREDUCE-6204.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6204) TestJobCounters should use new properties instead of JobConf.MAPRED_TASK_JAVA_OPTS

2015-05-26 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558995#comment-14558995
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6204:
---

OK, thank you :-) 

 TestJobCounters should use new properties instead of 
 JobConf.MAPRED_TASK_JAVA_OPTS
 --

 Key: MAPREDUCE-6204
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6204
 Project: Hadoop Map/Reduce
  Issue Type: Test
  Components: test
Affects Versions: 2.6.0
Reporter: sam liu
Assignee: sam liu
Priority: Minor
  Labels: BB2015-05-RFC
 Fix For: 2.8.0

 Attachments: MAPREDUCE-6204-1.patch, MAPREDUCE-6204-2.patch, 
 MAPREDUCE-6204-3.patch, MAPREDUCE-6204-4.patch, MAPREDUCE-6204.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6364) Add a Kill link to Task Attempts page

2015-05-26 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559201#comment-14559201
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6364:
---

+1, checking this in.

 Add a Kill link to Task Attempts page
 ---

 Key: MAPREDUCE-6364
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6364
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
Reporter: Ryu Kobayashi
Assignee: Ryu Kobayashi
Priority: Minor
 Attachments: MAPREDUCE-6364-screenshot.png, MAPREDUCE-6364.1.patch, 
 MAPREDUCE-6364.2.patch, MAPREDUCE-6364.3.patch


 Add a Kill link to Task Attempts page. Call in the REST API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6364) Add a Kill link to Task Attempts page

2015-05-26 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-6364:
--
   Resolution: Fixed
Fix Version/s: 2.8.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~ryu_kobayashi] for your 
contribution.

 Add a Kill link to Task Attempts page
 ---

 Key: MAPREDUCE-6364
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6364
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
Reporter: Ryu Kobayashi
Assignee: Ryu Kobayashi
Priority: Minor
 Fix For: 2.8.0

 Attachments: MAPREDUCE-6364-screenshot.png, MAPREDUCE-6364.1.patch, 
 MAPREDUCE-6364.2.patch, MAPREDUCE-6364.3.patch


 Add a Kill link to Task Attempts page. Call in the REST API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6364) Add a Kill link to Task Attempts page

2015-05-26 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-6364:
--
Component/s: applicationmaster

 Add a Kill link to Task Attempts page
 ---

 Key: MAPREDUCE-6364
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6364
 Project: Hadoop Map/Reduce
  Issue Type: New Feature
  Components: applicationmaster
Reporter: Ryu Kobayashi
Assignee: Ryu Kobayashi
Priority: Minor
 Fix For: 2.8.0

 Attachments: MAPREDUCE-6364-screenshot.png, MAPREDUCE-6364.1.patch, 
 MAPREDUCE-6364.2.patch, MAPREDUCE-6364.3.patch


 Add a Kill link to Task Attempts page. Call in the REST API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6364) Add a Kill link to Task Attempts page

2015-05-26 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-6364:
--
Issue Type: New Feature  (was: Improvement)

 Add a Kill link to Task Attempts page
 ---

 Key: MAPREDUCE-6364
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6364
 Project: Hadoop Map/Reduce
  Issue Type: New Feature
  Components: applicationmaster
Reporter: Ryu Kobayashi
Assignee: Ryu Kobayashi
Priority: Minor
 Fix For: 2.8.0

 Attachments: MAPREDUCE-6364-screenshot.png, MAPREDUCE-6364.1.patch, 
 MAPREDUCE-6364.2.patch, MAPREDUCE-6364.3.patch


 Add a Kill link to Task Attempts page. Call in the REST API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6364) Add a Kill link to Task Attempts page

2015-05-26 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-6364:
--
Description: Add a Kill link to Task Attempts page, calling REST API by 
pushing the link.  (was: Add a Kill link to Task Attempts page. Call in the 
REST API.)

 Add a Kill link to Task Attempts page
 ---

 Key: MAPREDUCE-6364
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6364
 Project: Hadoop Map/Reduce
  Issue Type: New Feature
  Components: applicationmaster
Reporter: Ryu Kobayashi
Assignee: Ryu Kobayashi
Priority: Minor
 Fix For: 2.8.0

 Attachments: MAPREDUCE-6364-screenshot.png, MAPREDUCE-6364.1.patch, 
 MAPREDUCE-6364.2.patch, MAPREDUCE-6364.3.patch


 Add a Kill link to Task Attempts page, calling REST API by pushing the link.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6204) TestJobCounters should use new properties instead of JobConf.MAPRED_TASK_JAVA_OPTS

2015-05-22 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-6204:
--
Summary: TestJobCounters should use new properties instead of 
JobConf.MAPRED_TASK_JAVA_OPTS  (was: TestJobCounters should use new properties 
instead JobConf.MAPRED_TASK_JAVA_OPTS)

 TestJobCounters should use new properties instead of 
 JobConf.MAPRED_TASK_JAVA_OPTS
 --

 Key: MAPREDUCE-6204
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6204
 Project: Hadoop Map/Reduce
  Issue Type: Test
  Components: test
Affects Versions: 2.6.0
Reporter: sam liu
Assignee: sam liu
Priority: Minor
  Labels: BB2015-05-RFC
 Attachments: MAPREDUCE-6204-1.patch, MAPREDUCE-6204-2.patch, 
 MAPREDUCE-6204-3.patch, MAPREDUCE-6204-4.patch, MAPREDUCE-6204.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6204) TestJobCounters should use new properties instead of JobConf.MAPRED_TASK_JAVA_OPTS

2015-05-22 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555899#comment-14555899
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6204:
---

Before committing, I'd like to run tests. Kicking Jenkins.

 TestJobCounters should use new properties instead of 
 JobConf.MAPRED_TASK_JAVA_OPTS
 --

 Key: MAPREDUCE-6204
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6204
 Project: Hadoop Map/Reduce
  Issue Type: Test
  Components: test
Affects Versions: 2.6.0
Reporter: sam liu
Assignee: sam liu
Priority: Minor
  Labels: BB2015-05-RFC
 Attachments: MAPREDUCE-6204-1.patch, MAPREDUCE-6204-2.patch, 
 MAPREDUCE-6204-3.patch, MAPREDUCE-6204-4.patch, MAPREDUCE-6204.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6204) TestJobCounters should use new properties instead of JobConf.MAPRED_TASK_JAVA_OPTS

2015-05-22 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-6204:
--
  Resolution: Fixed
   Fix Version/s: 2.8.0
Target Version/s: 2.8.0
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~sam liu] for your contribution 
and thanks [~jira.shegalov] for your review!

 TestJobCounters should use new properties instead of 
 JobConf.MAPRED_TASK_JAVA_OPTS
 --

 Key: MAPREDUCE-6204
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6204
 Project: Hadoop Map/Reduce
  Issue Type: Test
  Components: test
Affects Versions: 2.6.0
Reporter: sam liu
Assignee: sam liu
Priority: Minor
  Labels: BB2015-05-RFC
 Fix For: 2.8.0

 Attachments: MAPREDUCE-6204-1.patch, MAPREDUCE-6204-2.patch, 
 MAPREDUCE-6204-3.patch, MAPREDUCE-6204-4.patch, MAPREDUCE-6204.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6204) TestJobCounters should use new properties instead JobConf.MAPRED_TASK_JAVA_OPTS

2015-05-22 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555874#comment-14555874
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6204:
---

+1, committing this shortly.

 TestJobCounters should use new properties instead 
 JobConf.MAPRED_TASK_JAVA_OPTS
 ---

 Key: MAPREDUCE-6204
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6204
 Project: Hadoop Map/Reduce
  Issue Type: Test
  Components: test
Affects Versions: 2.6.0
Reporter: sam liu
Assignee: sam liu
Priority: Minor
  Labels: BB2015-05-RFC
 Attachments: MAPREDUCE-6204-1.patch, MAPREDUCE-6204-2.patch, 
 MAPREDUCE-6204-3.patch, MAPREDUCE-6204-4.patch, MAPREDUCE-6204.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6298) Job#toString throws an exception when not in state RUNNING

2015-05-19 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14550067#comment-14550067
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6298:
---

{quote}
1. Usage of StringBuffer is discouraged as it's slower than StringBuilder.
{quote}

I thought it's better to use StringBuilder than + concatenation in this 
case. What do you think? 
http://blog.eyallupu.com/2010/09/under-hood-of-java-strings.html
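For multi-statement assembly like Job#toString, an explicit builder keeps one buffer across all appends, whereas repeated `+=` concatenation allocates a fresh String per statement (a single-expression `a + b + c` is compiled to a builder anyway). A small sketch of the difference:

```java
public class BuilderDemo {
    // One builder reused across many appends: roughly O(total length) work.
    static String withBuilder(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append("line ").append(i).append('\n');
        }
        return sb.toString();
    }

    // Repeated += allocates and copies a new String per iteration:
    // O(n^2) copying as the accumulated string grows.
    static String withConcat(int n) {
        String s = "";
        for (int i = 0; i < n; i++) {
            s += "line " + i + "\n";
        }
        return s;
    }

    public static void main(String[] args) {
        // Same output either way; only the allocation behavior differs.
        System.out.println(withBuilder(3).equals(withConcat(3))); // true
    }
}
```

(StringBuffer behaves like StringBuilder but with synchronized methods, which is why it is slightly slower when no sharing across threads is needed.)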

{quote}
Do you want me to add another one similar to the existing one for RUNNING?
{quote}

Yes, you're right. I think we need to add a test for backward 
compatibility.


 Job#toString throws an exception when not in state RUNNING
 --

 Key: MAPREDUCE-6298
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6298
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Lars Francke
Assignee: Lars Francke
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: MAPREDUCE-6298.1.patch, MAPREDUCE-6298.2.patch, 
 MAPREDUCE-6298.3.patch


 Job#toString calls {{ensureState(JobState.RUNNING);}} as the very first 
 thing. That method causes an Exception to be thrown which is not nice.
 One thing this breaks is usage of Job on the Scala (e.g. Spark) REPL as that 
 calls toString after every invocation and that fails every time.
 I'll attach a patch that checks state and if it's RUNNING prints the original 
 message and if not prints something else.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6298) Job#toString throws an exception when not in state RUNNING

2015-05-19 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14550025#comment-14550025
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6298:
---

[~lars_francke], thank you for your contribution. The design of the fix looks 
good. Could you address following comments?

1. Why do we need to change the following parts? It's fine to use sb.append 
here. Also, we could use lineSep instead of append("\n"), but I think we should 
do that in another JIRA.
{code}
-StringBuffer sb = new StringBuffer();
-sb.append("Job: ").append(status.getJobID()).append("\n");
-sb.append("Job File: ").append(status.getJobFile()).append("\n");
-sb.append("Job Tracking URL : ").append(status.getTrackingUrl());
-sb.append("\n");
-sb.append("Uber job : ").append(status.isUber()).append("\n");
-sb.append("Number of maps: ").append(numMaps).append("\n");
-sb.append("Number of reduces: ").append(numReduces).append("\n");
-sb.append("map() completion: ");
-sb.append(status.getMapProgress()).append("\n");
-sb.append("reduce() completion: ");
-sb.append(status.getReduceProgress()).append("\n");
-sb.append("Job state: ");
-sb.append(status.getState()).append("\n");
-sb.append("retired: ").append(status.isRetired()).append("\n");
-sb.append("reason for failure: ").append(reasonforFailure);
-return sb.toString();
{code}

2. About TestJob#testJobToString, could you add a test to preserve the result 
of Job#toString() when the state of job is RUNNING?

 Job#toString throws an exception when not in state RUNNING
 --

 Key: MAPREDUCE-6298
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6298
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Lars Francke
Assignee: Lars Francke
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: MAPREDUCE-6298.1.patch, MAPREDUCE-6298.2.patch, 
 MAPREDUCE-6298.3.patch


 Job#toString calls {{ensureState(JobState.RUNNING);}} as the very first 
 thing. That method causes an Exception to be thrown which is not nice.
 One thing this breaks is usage of Job on the Scala (e.g. Spark) REPL as that 
 calls toString after every invocation and that fails every time.
 I'll attach a patch that checks state and if it's RUNNING prints the original 
 message and if not prints something else.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6204) TestJobCounters should use new properties instead JobConf.MAPRED_TASK_JAVA_OPTS

2015-05-18 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14548155#comment-14548155
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6204:
---

[~sam liu], thank you for pinging us. Let me clarify one point: the 
following options look like they are set by your configuration, as [~jira.shegalov] 
mentioned. Is that right?

{code}
-Xmx1000m -Xms1000m -Xmn100m -Xtune:virtualized 
-Xshareclasses:name=mrscc_%g,groupAccess,cacheDir=/var/hadoop/tmp,nonFatal 
-Xscmx20m -Xdump:java:file=/var/hadoop/tmp/javacore.%Y%m%d.%H%M%S.%pid.%seq.txt 
-Xdump:heap:file=/var/hadoop/tmp/heapdump.%Y%m%d.%H%M%S.%pid.%seq.phd
{code}

 TestJobCounters should use new properties instead 
 JobConf.MAPRED_TASK_JAVA_OPTS
 ---

 Key: MAPREDUCE-6204
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6204
 Project: Hadoop Map/Reduce
  Issue Type: Test
  Components: test
Affects Versions: 2.6.0
Reporter: sam liu
Assignee: sam liu
Priority: Minor
  Labels: BB2015-05-RFC
 Attachments: MAPREDUCE-6204-1.patch, MAPREDUCE-6204-2.patch, 
 MAPREDUCE-6204-3.patch, MAPREDUCE-6204-4.patch, MAPREDUCE-6204.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6204) TestJobCounters should use new properties instead JobConf.MAPRED_TASK_JAVA_OPTS

2015-05-18 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14549762#comment-14549762
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6204:
---

OK. I agree with the fix to use the newer properties instead of the deprecated 
one.

[~jira.shegalov] As you mentioned, the test failure may be unrelated and is 
addressed in MAPREDUCE-6205. However, we should move to the newer, more 
appropriate properties. Do you agree with this fix?

 TestJobCounters should use new properties instead 
 JobConf.MAPRED_TASK_JAVA_OPTS
 ---

 Key: MAPREDUCE-6204
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6204
 Project: Hadoop Map/Reduce
  Issue Type: Test
  Components: test
Affects Versions: 2.6.0
Reporter: sam liu
Assignee: sam liu
Priority: Minor
  Labels: BB2015-05-RFC
 Attachments: MAPREDUCE-6204-1.patch, MAPREDUCE-6204-2.patch, 
 MAPREDUCE-6204-3.patch, MAPREDUCE-6204-4.patch, MAPREDUCE-6204.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6364) Add a Kill link to Task Attempts page

2015-05-13 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14541608#comment-14541608
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6364:
---

[~ryu_kobayashi], thank you for taking this issue. Please correct me if I'm 
wrong, but maybe you forgot to replace the last %s with attemptId?
{code}
+stateURL =
+String.format("/proxy/%s/ws/v1/mapreduce/jobs/%s/tasks/%s/"
++ "attempts", appID, jobID, taskID) + "/%s/state";
{code}
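For comparison, a hypothetical version of that URL builder with the trailing placeholder filled in as well (method and variable names are illustrative, not the patch's):

{code:java}
public class StateUrlSketch {
    // Every %s placeholder is supplied an argument, including the final
    // one for the attempt id, so no literal "%s" leaks into the URL.
    static String stateUrl(String appId, String jobId,
                           String taskId, String attemptId) {
        return String.format(
            "/proxy/%s/ws/v1/mapreduce/jobs/%s/tasks/%s/attempts/%s/state",
            appId, jobId, taskId, attemptId);
    }

    public static void main(String[] args) {
        System.out.println(stateUrl("app_1", "job_1", "task_1", "attempt_1"));
    }
}
{code}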

IMHO, enableUIActions is preferred.
{code}
+  this.isUIActions =
+  conf.getBoolean(MRConfig.MASTER_WEBAPP_UI_ACTIONS_ENABLED,
+  MRConfig.DEFAULT_MASTER_WEBAPP_UI_ACTIONS_ENABLED);
{code}

 Add a Kill link to Task Attempts page
 ---

 Key: MAPREDUCE-6364
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6364
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
Reporter: Ryu Kobayashi
Assignee: Ryu Kobayashi
Priority: Minor
 Attachments: MAPREDUCE-6364-screenshot.png, MAPREDUCE-6364.1.patch, 
 MAPREDUCE-6364.2.patch


 Add a Kill link to Task Attempts page. Call in the REST API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6364) Add a Kill link to Task Attempts page

2015-05-13 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14541602#comment-14541602
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6364:
---

Let me take a look.

 Add a Kill link to Task Attempts page
 ---

 Key: MAPREDUCE-6364
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6364
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
Reporter: Ryu Kobayashi
Assignee: Ryu Kobayashi
Priority: Minor
 Attachments: MAPREDUCE-6364-screenshot.png, MAPREDUCE-6364.1.patch, 
 MAPREDUCE-6364.2.patch


 Add a Kill link to Task Attempts page. Call in the REST API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6366) TeraSort job's mapreduce.terasort.final.sync option doesn't work

2015-05-13 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14541490#comment-14541490
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6366:
---

Committing this shortly.

 TeraSort job's mapreduce.terasort.final.sync option doesn't work
 

 Key: MAPREDUCE-6366
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6366
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: examples
Reporter: Takuya Fukudome
Assignee: Takuya Fukudome
Priority: Trivial
 Attachments: MAPREDUCE-6366.1.patch


 TeraOutputFormat's field, finalSync, is always set to true. Therefore 
 TeraSort's mapreduce.terasort.final.sync option doesn't work.
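As a hedged illustration of the bug and its fix (using a plain Map as a stand-in for Hadoop's Configuration; the key name comes from the issue, the default value is arbitrary for this sketch):

{code:java}
import java.util.HashMap;
import java.util.Map;

public class FinalSyncSketch {
    static final String FINAL_SYNC_ATTRIBUTE = "mapreduce.terasort.final.sync";

    // Buggy behavior: the configuration is ignored and finalSync is
    // effectively hard-coded to true.
    static boolean buggyFinalSync(Map<String, String> conf) {
        return true;
    }

    // Fixed behavior: honor the option, falling back to a default when unset.
    static boolean fixedFinalSync(Map<String, String> conf) {
        return Boolean.parseBoolean(conf.getOrDefault(FINAL_SYNC_ATTRIBUTE, "true"));
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put(FINAL_SYNC_ATTRIBUTE, "false");
        System.out.println(buggyFinalSync(conf)); // true even when disabled
        System.out.println(fixedFinalSync(conf)); // respects the setting
    }
}
{code}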



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6366) mapreduce.terasort.final.sync configuration in TeraSort doesn't work

2015-05-13 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-6366:
--
Summary: mapreduce.terasort.final.sync configuration in TeraSort  doesn't 
work  (was: TeraSort job's mapreduce.terasort.final.sync option doesn't work)

 mapreduce.terasort.final.sync configuration in TeraSort  doesn't work
 -

 Key: MAPREDUCE-6366
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6366
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: examples
Reporter: Takuya Fukudome
Assignee: Takuya Fukudome
Priority: Trivial
 Attachments: MAPREDUCE-6366.1.patch


 TeraOutputFormat's field, finalSync, is always set to true. Therefore 
 TeraSort's mapreduce.terasort.final.sync option doesn't work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6366) mapreduce.terasort.final.sync configuration in TeraSort doesn't work

2015-05-13 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-6366:
--
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~tfukudom] for your report and 
contribution!

 mapreduce.terasort.final.sync configuration in TeraSort  doesn't work
 -

 Key: MAPREDUCE-6366
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6366
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: examples
Reporter: Takuya Fukudome
Assignee: Takuya Fukudome
Priority: Trivial
 Fix For: 2.8.0

 Attachments: MAPREDUCE-6366.1.patch


 TeraOutputFormat's field, finalSync, is always set to true. Therefore 
 TeraSort's mapreduce.terasort.final.sync option doesn't work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6298) Job#toString throws an exception when not in state RUNNING

2015-05-12 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14539377#comment-14539377
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6298:
---

I agree with Gera's comment. I think we can merge it to branch-2 in a 
graceful manner.

[~lars_francke] looking forward to your patch!

 Job#toString throws an exception when not in state RUNNING
 --

 Key: MAPREDUCE-6298
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6298
 Project: Hadoop Map/Reduce
  Issue Type: Bug
Reporter: Lars Francke
Assignee: Lars Francke
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: MAPREDUCE-6298.1.patch


 Job#toString calls {{ensureState(JobState.RUNNING);}} as the very first 
 thing. That method causes an Exception to be thrown which is not nice.
 One thing this breaks is usage of Job on the Scala (e.g. Spark) REPL as that 
 calls toString after every invocation and that fails every time.
 I'll attach a patch that checks state and if it's RUNNING prints the original 
 message and if not prints something else.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6366) TeraSort job's mapreduce.terasort.final.sync option doesn't work

2015-05-12 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-6366:
--
Description: A TeraOutputFormat's field, finalSync, is always set true. 
Therefore TeraSort 's  mapreduce.terasort.final.sync option doesn't work.  
(was: TeraOutputFormat's filed finalSync is always set true. Therefore 
TeraSort 's  mapreduce.terasort.final.sync option doesn't work.)

 TeraSort job's mapreduce.terasort.final.sync option doesn't work
 

 Key: MAPREDUCE-6366
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6366
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: examples
Reporter: Takuya Fukudome
Assignee: Takuya Fukudome
Priority: Trivial
 Attachments: MAPREDUCE-6366.1.patch


 TeraOutputFormat's field, finalSync, is always set to true. Therefore 
 TeraSort's mapreduce.terasort.final.sync option doesn't work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (MAPREDUCE-6366) TeraSort job's mapreduce.terasort.final.sync option doesn't work

2015-05-12 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated MAPREDUCE-6366:
--
Hadoop Flags: Reviewed
  Status: Patch Available  (was: Open)

 TeraSort job's mapreduce.terasort.final.sync option doesn't work
 

 Key: MAPREDUCE-6366
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6366
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: examples
Reporter: Takuya Fukudome
Assignee: Takuya Fukudome
Priority: Trivial
 Attachments: MAPREDUCE-6366.1.patch


 TeraOutputFormat's field, finalSync, is always set to true. Therefore 
 TeraSort's mapreduce.terasort.final.sync option doesn't work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MAPREDUCE-6366) TeraSort job's mapreduce.terasort.final.sync option doesn't work

2015-05-12 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14541405#comment-14541405
 ] 

Tsuyoshi Ozawa commented on MAPREDUCE-6366:
---

Forgot to run Jenkins. Submitting the patch and waiting for Jenkins.

 TeraSort job's mapreduce.terasort.final.sync option doesn't work
 

 Key: MAPREDUCE-6366
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6366
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: examples
Reporter: Takuya Fukudome
Assignee: Takuya Fukudome
Priority: Trivial
 Attachments: MAPREDUCE-6366.1.patch


 TeraOutputFormat's field, finalSync, is always set to true. Therefore 
 TeraSort's mapreduce.terasort.final.sync option doesn't work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

