[jira] [Commented] (MAPREDUCE-6661) hostLocalAssigned remains zero after scheduling

2016-07-20 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387134#comment-15387134
 ] 

Haibo Chen commented on MAPREDUCE-6661:
---

[~sultan] hostLocalAssigned is the number of mappers that ran on nodes where 
their input is also stored locally. A mapper can be scheduled on a data-local 
node, a rack-local node, or anywhere else in the cluster. The fact that 
hostLocalAssigned remains zero while rackLocalAssigned gets updated simply 
indicates that the mappers could not be scheduled on data-local nodes, so they 
ran on rack-local nodes instead.
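To make the distinction concrete, here is a minimal, self-contained sketch of 
how such locality counters are typically bumped during container assignment. 
It is not the actual RMContainerAllocator code; the class and method names are 
illustrative.

{noformat}
import java.util.Set;

// Illustrative only -- not the actual RMContainerAllocator logic.
// A mapper counts as host-local when the allocated node stores one of its
// input blocks; otherwise it is rack-local if the racks match, else off-rack.
public class LocalityCounters {
  private int hostLocalAssigned = 0;
  private int rackLocalAssigned = 0;
  private int otherLocalAssigned = 0;

  public void recordAssignment(String allocatedHost, String allocatedRack,
                               Set<String> inputHosts, Set<String> inputRacks) {
    if (inputHosts.contains(allocatedHost)) {
      hostLocalAssigned++;       // data-local: an input block is on this node
    } else if (inputRacks.contains(allocatedRack)) {
      rackLocalAssigned++;       // rack-local: same rack, different node
    } else {
      otherLocalAssigned++;      // off-rack assignment
    }
  }

  @Override
  public String toString() {
    return "hostLocal=" + hostLocalAssigned
        + " rackLocal=" + rackLocalAssigned
        + " other=" + otherLocalAssigned;
  }
}
{noformat}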

> hostLocalAssigned remains zero after scheduling 
> 
>
> Key: MAPREDUCE-6661
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6661
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Sultan Alamro
>Priority: Critical
>
> In the AM log file I see that hostLocalAssigned in 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator remains zero even 
> after scheduling, while rackLocalAssigned gets updated (I run the job on a 
> single rack).






[jira] [Assigned] (MAPREDUCE-6638) Jobs with encrypted spills don't recover if the AM goes down

2016-07-20 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen reassigned MAPREDUCE-6638:
-

Assignee: Haibo Chen

> Jobs with encrypted spills don't recover if the AM goes down
> 
>
> Key: MAPREDUCE-6638
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6638
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: applicationmaster
>Affects Versions: 2.7.2
>Reporter: Karthik Kambatla
>Assignee: Haibo Chen
>Priority: Critical
>
> Post the fix to CVE-2015-1776, jobs with encrypted spills enabled cannot be 
> recovered if the AM fails. We should store the key somewhere safe so they 
> can actually be recovered. If there is no "safe" place, at least we should 
> restart the job by re-running all mappers/reducers. 
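
The recovery problem can be illustrated with a small, self-contained sketch 
using only the standard javax.crypto API. Hadoop's real spill-key handling 
differs in detail; the point is only that a restarted AM which regenerates the 
key cannot read spill files encrypted under the previous attempt's key, which 
is why the key must be persisted somewhere safe or all tasks rerun.

{noformat}
import java.util.Arrays;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Hypothetical demo: an AM attempt that regenerates the spill key cannot
// decrypt spill files written under the previous attempt's key.
public class SpillKeyDemo {
  static SecretKey newSpillKey() throws Exception {
    KeyGenerator kg = KeyGenerator.getInstance("AES");  // algorithm is illustrative
    kg.init(128);
    return kg.generateKey();
  }

  public static void main(String[] args) throws Exception {
    SecretKey firstAttempt = newSpillKey();   // key that encrypted the spills
    SecretKey secondAttempt = newSpillKey();  // key a restarted AM would generate
    System.out.println("keys match: "
        + Arrays.equals(firstAttempt.getEncoded(), secondAttempt.getEncoded()));
    // Prints "keys match: false" -- the old spill files are unreadable, so the
    // key must be persisted securely or all tasks must be rerun.
  }
}
{noformat}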






[jira] [Assigned] (MAPREDUCE-6631) shuffle handler would benefit from per-local-dir threads

2016-07-20 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen reassigned MAPREDUCE-6631:
-

Assignee: Haibo Chen

> shuffle handler would benefit from per-local-dir threads
> 
>
> Key: MAPREDUCE-6631
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6631
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.7.2, 3.0.0-alpha1
>Reporter: Nathan Roberts
>Assignee: Haibo Chen
>
> [~jlowe] and I discussed this while investigating I/O starvation we have been 
> seeing on our clusters lately (possibly amplified by increased tez 
> workloads). 
> If a particular disk is being slow, it is very likely that all shuffle netty 
> threads will be blocked on the read side of sendfile(). (sendfile() is 
> asynchronous on the outbound socket side, but not on the read side.) This 
> causes the entire shuffle subsystem to slow down. 
> It seems like we could make the netty threads more asynchronous by 
> introducing a small set of threads per local-dir that are responsible for the 
> actual sendfile() invocations.
> This would not only improve shuffles that span drives, but also improve 
> situations where there is a single large shuffle from a single local-dir. It 
> would allow other drives to continue serving shuffle requests, AND avoid a 
> large number of readers (2X number_of_cores by default) all fighting for the 
> same drive, which becomes unfair to everything else on the system.
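
A rough sketch of the proposed idea, assuming a simple map of per-local-dir 
executors; it is not the ShuffleHandler implementation, only an illustration of 
how a slow disk would then block only its own small pool instead of every 
Netty worker.

{noformat}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch only -- not the ShuffleHandler implementation.
public class PerDirTransferPools {
  private final ConcurrentMap<String, ExecutorService> pools =
      new ConcurrentHashMap<>();
  private final int threadsPerDir;

  public PerDirTransferPools(int threadsPerDir) {
    this.threadsPerDir = threadsPerDir;
  }

  /** Run a potentially blocking sendfile-style transfer on the small pool
   *  owned by the local dir that holds the requested map output. */
  public Future<?> submitTransfer(String localDir, Runnable transfer) {
    ExecutorService pool = pools.computeIfAbsent(
        localDir, dir -> Executors.newFixedThreadPool(threadsPerDir));
    return pool.submit(transfer);
  }
}
{noformat}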






[jira] [Commented] (MAPREDUCE-6581) Shuffle failure in case of NativeMapOutputCollectorDelegator with intermediate-data encryption

2016-07-20 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387121#comment-15387121
 ] 

Haibo Chen commented on MAPREDUCE-6581:
---

Data corruption on the mapper node may have caused this issue. 

> Shuffle failure in case of NativeMapOutputCollectorDelegator with 
> intermediate-data encryption
> --
>
> Key: MAPREDUCE-6581
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6581
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Priority: Blocker
>
> *Steps to reproduce*
> # Create data with teragen
> # Run terasort on data prepared using teragen
> Commands used 
> ./yarn jar 
> ../share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-SNAPSHOT.jar 
> teragen 1024000 /Terainput1
> ./yarn jar 
> ../share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-SNAPSHOT.jar 
> terasort -Dmapreduce.job.encrypted-intermediate-data=true 
> -Dmapreduce.job.map.output.collector.class=org.apache.hadoop.mapred.nativetask.NativeMapOutputCollectorDelegator
>   -Dmapreduce.map.output.compress=true  
> -Dmapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec
>  -Dmapreduce.output.fileoutputformat.compress=true 
> -Dmapreduce.output.fileoutputformat.compress.type=BLOCK 
> -Dmapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.GzipCodec
>  -Dmapreduce.reduce.memory.mb=1024 /Terainput1/Teraout12
> {noformat}
> 15/12/18 23:07:57 INFO mapreduce.Job: Task Id : 
> attempt_1450453391718_0017_r_00_2, Status : FAILED
> Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in 
> shuffle in fetcher#5
>   at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
>   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
> Caused by: java.lang.ArrayIndexOutOfBoundsException
>   at 
> org.apache.hadoop.io.compress.snappy.SnappyDecompressor.setInput(SnappyDecompressor.java:107)
>   at 
> org.apache.hadoop.io.compress.BlockDecompressorStream.decompress(BlockDecompressorStream.java:104)
>   at 
> org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:201)
>   at 
> org.apache.hadoop.mapreduce.task.reduce.InMemoryMapOutput.doShuffle(InMemoryMapOutput.java:90)
>   at 
> org.apache.hadoop.mapreduce.task.reduce.IFileWrappedMapOutput.shuffle(IFileWrappedMapOutput.java:63)
>   at 
> org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyMapOutput(Fetcher.java:538)
>   at 
> org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:336)
>   at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:193)
> {noformat}






[jira] [Assigned] (MAPREDUCE-6506) Make the reducer-preemption configs consistent in how they handle defaults

2016-07-20 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen reassigned MAPREDUCE-6506:
-

Assignee: Haibo Chen

> Make the reducer-preemption configs consistent in how they handle defaults
> --
>
> Key: MAPREDUCE-6506
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6506
> Project: Hadoop Map/Reduce
>  Issue Type: Task
>  Components: applicationmaster
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Haibo Chen
>Priority: Critical
>







[jira] [Resolved] (MAPREDUCE-6482) Since JVM reuse is not supported in Hadoop 2.x, why do I still find mapreduce.job.jvm.numtasks in mapred-default.xml

2016-07-20 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen resolved MAPREDUCE-6482.
---
Resolution: Invalid

mapreduce.job.jvm.numtasks is no longer in mapred-default.xml

> Since JVM reuse is not supported in Hadoop 2.x, why do I still find 
> mapreduce.job.jvm.numtasks in mapred-default.xml
> 
>
> Key: MAPREDUCE-6482
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6482
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 2.6.0, 2.7.0, 2.7.1
>Reporter: Wei Chen
>







[jira] [Updated] (MAPREDUCE-6401) Container-launch failure gives no debugging output

2016-07-20 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated MAPREDUCE-6401:
--
Component/s: (was: mrv2)
 nodemanager

> Container-launch failure gives no debugging output
> --
>
> Key: MAPREDUCE-6401
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6401
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.6.0
> Environment: HDP 2.2
>Reporter: Hari Sekhon
>Priority: Minor
> Attachments: job.log
>
>
> MR jobs are failing on my cluster with "Stack trace: ExitCodeException 
> exitCode=7" but little else in terms of debugging information. Can we please 
> improve the debugging output? The log file is attached.






[jira] [Resolved] (MAPREDUCE-6131) Integer overflow in RMContainerAllocator results in starvation of applications

2016-07-20 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen resolved MAPREDUCE-6131.
---
Resolution: Invalid

> Integer overflow in RMContainerAllocator results in starvation of applications
> --
>
> Key: MAPREDUCE-6131
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6131
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Kamal Kc
> Attachments: MAPREDUCE-6131-2.2.0.patch
>
>
> When processing large datasets, Hadoop encounters a scenario where all
>  containers run reduce tasks and no map tasks are scheduled. The 
> application does not fail but rather remains in this state without making 
> any forward progress. It then has to be manually terminated. 
> This bug is due to integer overflow in scheduleReduces() of 
> RMContainerAllocator. The variable netScheduledMapMem overflows for 
> large data sizes, takes a negative value, and results in a large 
> finalReduceMemLimit and a large rampup value. In almost all cases, this 
> large rampup value is greater than the total number of reduce tasks. 
> Therefore, the AM tries to assign all reduce tasks. If the total number of 
> reduce tasks exceeds the total number of container slots, all slots are 
> taken up by reduce tasks, leaving none for maps. 
> With 128MB block size and 2GB map container size, overflow occurs with 128 TB 
> data size. An example scenario for the reproduction is: 
> - Input data size of 32TB, block size 128MB, Map container size = 10GB,
> reduce container size = 10GB, #reducers = 50,  cluster mem capacity =  7 x 
> 40GB, slowstart=0.0
> A better resolution might be to change the variables used in 
> RMContainerAllocator from int to long. A simpler fix would be to change 
> only the local variables of scheduleReduces() to long data types. 
> Patch is attached for 2.2.0. 
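
The arithmetic behind the overflow can be checked with a short standalone 
program (an illustration, not the scheduleReduces() code): with 128 MB blocks 
a 128 TB input yields 1,048,576 map tasks, and at 2,048 MB per map the 
scheduled map memory is 2^31 MB, one more than Integer.MAX_VALUE, so an int 
accumulator wraps negative while a long holds the correct value.

{noformat}
// Standalone arithmetic check, not the scheduleReduces() code.
public class ScheduledMapMemOverflow {
  public static void main(String[] args) {
    int maps = 1_048_576;        // 128 TB / 128 MB block size
    int mapMemMb = 2048;         // 2 GB map container

    int asInt = maps * mapMemMb;            // wraps to -2147483648
    long asLong = (long) maps * mapMemMb;   // correct: 2147483648

    System.out.println("int  netScheduledMapMem = " + asInt);
    System.out.println("long netScheduledMapMem = " + asLong);
  }
}
{noformat}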






[jira] [Commented] (MAPREDUCE-6740) Enforce mapreduce.task.timeout to be at least mapreduce.task.progress-report.interval

2016-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387090#comment-15387090
 ] 

Hadoop QA commented on MAPREDUCE-6740:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 44s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 18s 
{color} | {color:red} 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app: 
The patch generated 1 new + 3 unchanged - 1 fixed = 4 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 35s 
{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 47s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12819237/mapreduce6740.001.patch
 |
| JIRA Issue | MAPREDUCE-6740 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux bad9082b10d9 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e340064 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6633/artifact/patchprocess/diff-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6633/testReport/ |
| modules | C: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app U: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app |
| Console output | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6633/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Enforce mapreduce.task.timeout to be at least 
> mapreduce.task.progress-report.interval
> -
>
>

[jira] [Commented] (MAPREDUCE-6740) Enforce mapreduce.task.timeout to be at least mapreduce.task.progress-report.interval

2016-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15387038#comment-15387038
 ] 

Hadoop QA commented on MAPREDUCE-6740:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 39s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
5s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 13s 
{color} | {color:red} 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app: 
The patch generated 1 new + 3 unchanged - 1 fixed = 4 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
1s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 50s 
{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 20s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12819237/mapreduce6740.001.patch
 |
| JIRA Issue | MAPREDUCE-6740 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux da70d8072e4a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e340064 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6632/artifact/patchprocess/diff-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6632/testReport/ |
| modules | C: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app U: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app |
| Console output | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6632/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Enforce mapreduce.task.timeout to be at least 
> mapreduce.task.progress-report.interval
> -
>
>   

[jira] [Resolved] (MAPREDUCE-6687) Allow specifying java home via job configuration

2016-07-20 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen resolved MAPREDUCE-6687.
---
Resolution: Implemented

> Allow specifying java home via job configuration
> ---
>
> Key: MAPREDUCE-6687
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6687
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>  Components: applicationmaster
>Reporter: He Tianyi
>Priority: Minor
>
> Suggest allowing the user to pick a preferred JVM implementation (or version) 
> for launching Map/Reduce tasks by specifying the java home via JobConf. 
> This is especially useful for running A/B tests on real workloads or 
> benchmarks between JVM implementations.
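
A hypothetical sketch of what such a per-job setting could look like. 
mapreduce.map.env, mapreduce.reduce.env, and yarn.app.mapreduce.am.env are 
existing environment-variable properties, but whether a JAVA_HOME placed there 
is honored for the launch JVM depends on the Hadoop version and cluster setup, 
so this shows the requested behavior rather than a confirmed mechanism.

{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

// Hypothetical sketch of a per-job JVM override; /opt/jdk-alternate is a
// made-up path, and honoring JAVA_HOME from these env properties is an
// assumption, not guaranteed behavior.
public class AlternateJvmJob {
  public static Job createJob() throws Exception {
    Configuration conf = new Configuration();
    conf.set("mapreduce.map.env", "JAVA_HOME=/opt/jdk-alternate");
    conf.set("mapreduce.reduce.env", "JAVA_HOME=/opt/jdk-alternate");
    conf.set("yarn.app.mapreduce.am.env", "JAVA_HOME=/opt/jdk-alternate");
    return Job.getInstance(conf, "ab-test-alternate-jvm");
  }
}
{noformat}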






[jira] [Commented] (MAPREDUCE-5124) AM lacks flow control for task events

2016-07-20 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386931#comment-15386931
 ] 

Haibo Chen commented on MAPREDUCE-5124:
---

Turns out MAPREDUCE-6242 has already made the task heartbeat period 
configurable (mapreduce.task.progress-report.interval). I will leave this open 
to implement the other approach, automatic flow control of task events, as 
Jason suggested.
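
For reference, a minimal sketch of setting that knob programmatically. The 
property name comes from MAPREDUCE-6242; the value is assumed here to be in 
milliseconds, so check mapred-default.xml for the authoritative unit and 
default.

{noformat}
import org.apache.hadoop.conf.Configuration;

// Sketch: raise the task progress-report interval to 30 seconds to reduce
// status-update pressure on the AM for very large jobs.
public class ProgressReportInterval {
  public static Configuration configure() {
    Configuration conf = new Configuration();
    conf.setLong("mapreduce.task.progress-report.interval", 30_000L);
    return conf;
  }
}
{noformat}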

> AM lacks flow control for task events
> -
>
> Key: MAPREDUCE-5124
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5124
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mr-am
>Affects Versions: 2.0.3-alpha, 0.23.5
>Reporter: Jason Lowe
>Assignee: Haibo Chen
> Attachments: MAPREDUCE-5124-proto.2.txt, MAPREDUCE-5124-prototype.txt
>
>
> The AM does not have any flow control to limit the incoming rate of events 
> from tasks.  If the AM is unable to keep pace with the rate of incoming 
> events for a sufficient period of time then it will eventually exhaust the 
> heap and crash.  MAPREDUCE-5043 addressed a major bottleneck for event 
> processing, but the AM could still get behind if it's starved for CPU and/or 
> handling a very large job with tens of thousands of active tasks.
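
A minimal sketch of the kind of backpressure that is missing, using a bounded 
queue; the AM's actual event dispatcher is more involved, so this only 
illustrates the principle that producers should block rather than let the 
backlog grow without bound.

{noformat}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch only -- not the AM's dispatcher. A bounded queue makes producers
// block (backpressure) instead of growing the event backlog until the heap
// is exhausted.
public class BoundedEventQueue<E> {
  private final BlockingQueue<E> queue;

  public BoundedEventQueue(int capacity) {
    this.queue = new ArrayBlockingQueue<>(capacity);
  }

  /** Producer side: blocks when the queue is full. */
  public void offerEvent(E event) throws InterruptedException {
    queue.put(event);
  }

  /** Dispatcher side: drains events as fast as it can. */
  public E takeEvent() throws InterruptedException {
    return queue.take();
  }
}
{noformat}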






[jira] [Updated] (MAPREDUCE-6740) Enforce mapreduce.task.timeout to be at least mapreduce.task.progress-report.interval

2016-07-20 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated MAPREDUCE-6740:
--
Attachment: mapreduce6740.001.patch

> Enforce mapreduce.task.timeout to be at least 
> mapreduce.task.progress-report.interval
> -
>
> Key: MAPREDUCE-6740
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6740
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mr-am
>Affects Versions: 2.8.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Minor
> Attachments: mapreduce6740.001.patch
>
>
> MAPREDUCE-6242 made the task status update interval configurable to ease the 
> pressure on the MR AM to process status updates, but it did not ensure that 
> mapreduce.task.timeout is no smaller than the configured task report 
> interval. 
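
A minimal sketch of the guard this issue aims for, assuming illustrative 
defaults; the actual constants and field names in the AM may differ.

{noformat}
import org.apache.hadoop.conf.Configuration;

// Sketch of the guard; the defaults below are illustrative.
public class TaskTimeoutGuard {
  public static long effectiveTaskTimeout(Configuration conf) {
    long timeoutMs = conf.getLong("mapreduce.task.timeout", 600_000L);
    long reportIntervalMs =
        conf.getLong("mapreduce.task.progress-report.interval", 3_000L);
    // Never let the timeout drop below the report interval, otherwise a
    // healthy task that reports infrequently would be declared lost.
    return Math.max(timeoutMs, reportIntervalMs);
  }
}
{noformat}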






[jira] [Updated] (MAPREDUCE-6740) Enforce mapreduce.task.timeout to be at least mapreduce.task.progress-report.interval

2016-07-20 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated MAPREDUCE-6740:
--
Status: Patch Available  (was: Open)

> Enforce mapreduce.task.timeout to be at least 
> mapreduce.task.progress-report.interval
> -
>
> Key: MAPREDUCE-6740
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6740
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mr-am
>Affects Versions: 2.8.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Minor
> Attachments: mapreduce6740.001.patch
>
>
> MAPREDUCE-6242 made the task status update interval configurable to ease the 
> pressure on the MR AM to process status updates, but it did not ensure that 
> mapreduce.task.timeout is no smaller than the configured task report 
> interval. 






[jira] [Commented] (MAPREDUCE-6682) TestMRCJCFileOutputCommitter fails intermittently

2016-07-20 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386858#comment-15386858
 ] 

Akira Ajisaka commented on MAPREDUCE-6682:
--

Hi [~brahmareddy], would you review this?

> TestMRCJCFileOutputCommitter fails intermittently
> -
>
> Key: MAPREDUCE-6682
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6682
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>Assignee: Akira Ajisaka
> Attachments: MAPREDUCE-6682.00.patch, MAPREDUCE-6682.01.patch, 
> MAPREDUCE-6682.02.patch
>
>
> {noformat}
> java.lang.AssertionError: Output directory not empty expected:<0> but was:<4>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.mapred.TestMRCJCFileOutputCommitter.testAbort(TestMRCJCFileOutputCommitter.java:153)
> {noformat}
> *PreCommit Report* 
> https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6434/testReport/






[jira] [Commented] (MAPREDUCE-6738) TestJobListCache.testAddExisting failed intermittently in slow VM testbed

2016-07-20 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386842#comment-15386842
 ] 

Akira Ajisaka commented on MAPREDUCE-6738:
--

+1, thanks [~djp].

> TestJobListCache.testAddExisting failed intermittently in slow VM testbed
> -
>
> Key: MAPREDUCE-6738
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6738
> Project: Hadoop Map/Reduce
>  Issue Type: Test
>Affects Versions: 2.7.3
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Minor
> Attachments: MAPREDUCE-6738.patch
>
>
> Kicked off a Jenkins test, which occasionally fails for this test with the 
> stack trace below: 
> org.apache.hadoop.mapreduce.v2.hs.TestJobListCache.testAddExisting
> java.lang.Exception: test timed out after 1000 milliseconds
>   at org.mockito.cglib.proxy.Enhancer.generateClass(Enhancer.java:483)
>   at 
> org.mockito.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
>   at 
> org.mockito.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:217)
>   at org.mockito.cglib.proxy.Enhancer.createHelper(Enhancer.java:378)
>   at org.mockito.cglib.proxy.Enhancer.createClass(Enhancer.java:318)
>   at 
> org.mockito.internal.creation.jmock.ClassImposterizer.createProxyClass(ClassImposterizer.java:93)
>   at 
> org.mockito.internal.creation.jmock.ClassImposterizer.imposterise(ClassImposterizer.java:50)
>   at org.mockito.internal.util.MockUtil.createMock(MockUtil.java:54)
>   at org.mockito.internal.MockitoCore.mock(MockitoCore.java:45)
>   at org.mockito.Mockito.mock(Mockito.java:921)
>   at org.mockito.Mockito.mock(Mockito.java:816)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.TestJobListCache.testAddExisting(TestJobListCache.java:42)






[jira] [Updated] (MAPREDUCE-6740) Enforce mapreduce.task.timeout to be at least mapreduce.task.progress-report.interval

2016-07-20 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated MAPREDUCE-6740:
--
Issue Type: Bug  (was: Improvement)

> Enforce mapreduce.task.timeout to be at least 
> mapreduce.task.progress-report.interval
> -
>
> Key: MAPREDUCE-6740
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6740
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mr-am
>Affects Versions: 2.8.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Minor
>
> MAPREDUCE-6242 made the task status update interval configurable to ease the 
> pressure on the MR AM to process status updates, but it did not ensure that 
> mapreduce.task.timeout is no smaller than the configured task report 
> interval. 






[jira] [Created] (MAPREDUCE-6740) Enforce mapreduce.task.timeout to be at least mapreduce.task.progress-report.interval

2016-07-20 Thread Haibo Chen (JIRA)
Haibo Chen created MAPREDUCE-6740:
-

 Summary: Enforce mapreduce.task.timeout to be at least 
mapreduce.task.progress-report.interval
 Key: MAPREDUCE-6740
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6740
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: mr-am
Affects Versions: 2.8.0
Reporter: Haibo Chen
Assignee: Haibo Chen
Priority: Minor


MAPREDUCE-6242 made the task status update interval configurable to ease the 
pressure on the MR AM to process status updates, but it did not ensure that 
mapreduce.task.timeout is no smaller than the configured task report 
interval. 






[jira] [Created] (MAPREDUCE-6739) allow specifying range on the port that MR AM web server binds to

2016-07-20 Thread Haibo Chen (JIRA)
Haibo Chen created MAPREDUCE-6739:
-

 Summary: allow specifying range on the port that MR AM web server 
binds to
 Key: MAPREDUCE-6739
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6739
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
Reporter: Haibo Chen
Assignee: Haibo Chen


The MR AM web server binds itself to an arbitrary port. This means that if the 
RM web proxy lives outside of the cluster, the whole port range needs to be 
wide open. It would be nice to reuse yarn.app.mapreduce.am.job.client.port-range 
to place a port-range restriction on the MR AM web server, so that connections 
from outside the cluster can be restricted to a range of ports.
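
For illustration, setting the existing client port-range property looks like 
the following. Whether the web server also honors it is exactly what this 
issue proposes, so treat the sketch as the desired end state rather than 
current behavior; 50000-50100 is an arbitrary example range.

{noformat}
import org.apache.hadoop.conf.Configuration;

// Sketch: restrict AM ports to an example range; today this property only
// governs the AM's client RPC port, and extending it to the web server is
// what this issue asks for.
public class AmPortRange {
  public static Configuration configure() {
    Configuration conf = new Configuration();
    conf.set("yarn.app.mapreduce.am.job.client.port-range", "50000-50100");
    return conf;
  }
}
{noformat}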






[jira] [Updated] (MAPREDUCE-6739) allow specifying range on the port that MR AM web server binds to

2016-07-20 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated MAPREDUCE-6739:
--
Affects Version/s: 2.7.2

> allow specifying range on the port that MR AM web server binds to
> -
>
> Key: MAPREDUCE-6739
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6739
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: mr-am
>Affects Versions: 2.7.2
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>  Labels: supportability
>
> The MR AM web server binds itself to an arbitrary port. This means that if 
> the RM web proxy lives outside of the cluster, the whole port range needs to 
> be wide open. It would be nice to reuse 
> yarn.app.mapreduce.am.job.client.port-range to place a port-range restriction 
> on the MR AM web server, so that connections from outside the cluster can be 
> restricted to a range of ports.






[jira] [Updated] (MAPREDUCE-6739) allow specifying range on the port that MR AM web server binds to

2016-07-20 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated MAPREDUCE-6739:
--
Labels: supportability  (was: )

> allow specifying range on the port that MR AM web server binds to
> -
>
> Key: MAPREDUCE-6739
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6739
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: mr-am
>Affects Versions: 2.7.2
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>  Labels: supportability
>
> The MR AM web server binds itself to an arbitrary port. This means that if 
> the RM web proxy lives outside of the cluster, the whole port range needs to 
> be wide open. It would be nice to reuse 
> yarn.app.mapreduce.am.job.client.port-range to place a port-range restriction 
> on the MR AM web server, so that connections from outside the cluster can be 
> restricted to a range of ports.






[jira] [Updated] (MAPREDUCE-6739) allow specifying range on the port that MR AM web server binds to

2016-07-20 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated MAPREDUCE-6739:
--
Component/s: mr-am

> allow specifying range on the port that MR AM web server binds to
> -
>
> Key: MAPREDUCE-6739
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6739
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: mr-am
>Affects Versions: 2.7.2
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>  Labels: supportability
>
> The MR AM web server binds itself to an arbitrary port. This means that if 
> the RM web proxy lives outside of the cluster, the whole port range needs to 
> be wide open. It would be nice to reuse 
> yarn.app.mapreduce.am.job.client.port-range to place a port-range restriction 
> on the MR AM web server, so that connections from outside the cluster can be 
> restricted to a range of ports.






[jira] [Commented] (MAPREDUCE-6737) HS: job history recovery fails with NumericFormatException if the job wasn't initted properly

2016-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386680#comment-15386680
 ] 

Hadoop QA commented on MAPREDUCE-6737:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
0s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 44s 
{color} | {color:green} hadoop-mapreduce-client-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 58s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12819056/MAPREDUCE-6737.patch |
| JIRA Issue | MAPREDUCE-6737 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d987bb268404 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 38128ba |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6631/testReport/ |
| modules | C: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common 
U: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common 
|
| Console output | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6631/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> HS: job history recovery fails with NumericFormatException if the job wasn't 
> initted properly
> -
>
> Key: MAPREDUCE-6737
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6737
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>

[jira] [Commented] (MAPREDUCE-6724) Unsafe conversion from long to int in MergeManagerImpl.unconditionalReserve()

2016-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386657#comment-15386657
 ] 

Hadoop QA commented on MAPREDUCE-6724:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 29s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
47s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
1s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 2s 
{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 8s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12819126/mapreduce6724.006.patch
 |
| JIRA Issue | MAPREDUCE-6724 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 4d1d348dfe45 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 38128ba |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6630/testReport/ |
| modules | C: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
U: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core |
| Console output | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6630/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Unsafe conversion from long to int in MergeManagerImpl.unconditionalReserve()
> -
>
> Key: MAPREDUCE-6724
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6724
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  
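
A generic illustration of guarding a long-to-int narrowing (the actual fix in 
MergeManagerImpl may differ, and Math.toIntExact is another option): a plain 
cast silently truncates values above Integer.MAX_VALUE, while an explicit 
check fails fast.

{noformat}
// Generic illustration; the actual fix in MergeManagerImpl may differ.
public class SafeNarrowing {
  public static int toIntChecked(long value) {
    if (value < Integer.MIN_VALUE || value > Integer.MAX_VALUE) {
      throw new IllegalArgumentException("value out of int range: " + value);
    }
    return (int) value;
  }

  public static void main(String[] args) {
    long requested = 3L * 1024 * 1024 * 1024;   // a 3 GB reservation
    System.out.println((int) requested);        // silently truncates: -1073741824
    try {
      System.out.println(toIntChecked(requested));
    } catch (IllegalArgumentException e) {
      System.out.println("rejected: " + e.getMessage());  // fails fast instead
    }
  }
}
{noformat}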

[jira] [Commented] (MAPREDUCE-6738) TestJobListCache.testAddExisting failed intermittently in slow VM testbed

2016-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386630#comment-15386630
 ] 

Hadoop QA commented on MAPREDUCE-6738:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 30s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 53s 
{color} | {color:green} hadoop-mapreduce-client-hs in the patch passed. {color} 
|
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 39s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12819179/MAPREDUCE-6738.patch |
| JIRA Issue | MAPREDUCE-6738 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 666dfdf8cb9d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 38128ba |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6629/testReport/ |
| modules | C: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs U: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs |
| Console output | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6629/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> TestJobListCache.testAddExisting failed intermittently in slow VM testbed
> -
>
> Key: MAPREDUCE-6738
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6738
> Project: Hadoop Map/Reduce
>  Issue Type: Test
>Affects Versions: 2.7.3
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Minor
> Attachments: MAPREDUCE-6738.patch
>
>
> Kick off 

[jira] [Commented] (MAPREDUCE-6738) TestJobListCache.testAddExisting failed intermittently in slow VM testbed

2016-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386522#comment-15386522
 ] 

Hadoop QA commented on MAPREDUCE-6738:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
6s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 40s 
{color} | {color:green} hadoop-mapreduce-client-hs in the patch passed. {color} 
|
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 49s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12819179/MAPREDUCE-6738.patch |
| JIRA Issue | MAPREDUCE-6738 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b43b60a6f1c7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 38128ba |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6628/testReport/ |
| modules | C: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs U: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs |
| Console output | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6628/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> TestJobListCache.testAddExisting failed intermittently in slow VM testbed
> -
>
> Key: MAPREDUCE-6738
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6738
> Project: Hadoop Map/Reduce
>  Issue Type: Test
>Affects Versions: 2.7.3
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Minor
> Attachments: MAPREDUCE-6738.patch
>
>
> Kick off 

[jira] [Commented] (MAPREDUCE-6737) HS: job history recovery fails with NumberFormatException if the job wasn't initted properly

2016-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386463#comment-15386463
 ] 

Hadoop QA commented on MAPREDUCE-6737:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
0s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 40s 
{color} | {color:green} hadoop-mapreduce-client-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 17s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12819056/MAPREDUCE-6737.patch |
| JIRA Issue | MAPREDUCE-6737 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5303c80d2a6c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 38128ba |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6627/testReport/ |
| modules | C: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common 
U: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common 
|
| Console output | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6627/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> HS: job history recovery fails with NumberFormatException if the job wasn't 
> initted properly
> -
>
> Key: MAPREDUCE-6737
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6737
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>

[jira] [Updated] (MAPREDUCE-6738) TestJobListCache.testAddExisting failed intermittently in slow VM testbed

2016-07-20 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated MAPREDUCE-6738:
--
Status: Patch Available  (was: Open)

> TestJobListCache.testAddExisting failed intermittently in slow VM testbed
> -
>
> Key: MAPREDUCE-6738
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6738
> Project: Hadoop Map/Reduce
>  Issue Type: Test
>Affects Versions: 2.7.3
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Minor
> Attachments: MAPREDUCE-6738.patch
>
>
> Kicked-off Jenkins test runs occasionally failed on this test with the stack 
> trace below: 
> org.apache.hadoop.mapreduce.v2.hs.TestJobListCache.testAddExisting
> java.lang.Exception: test timed out after 1000 milliseconds
>   at org.mockito.cglib.proxy.Enhancer.generateClass(Enhancer.java:483)
>   at 
> org.mockito.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
>   at 
> org.mockito.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:217)
>   at org.mockito.cglib.proxy.Enhancer.createHelper(Enhancer.java:378)
>   at org.mockito.cglib.proxy.Enhancer.createClass(Enhancer.java:318)
>   at 
> org.mockito.internal.creation.jmock.ClassImposterizer.createProxyClass(ClassImposterizer.java:93)
>   at 
> org.mockito.internal.creation.jmock.ClassImposterizer.imposterise(ClassImposterizer.java:50)
>   at org.mockito.internal.util.MockUtil.createMock(MockUtil.java:54)
>   at org.mockito.internal.MockitoCore.mock(MockitoCore.java:45)
>   at org.mockito.Mockito.mock(Mockito.java:921)
>   at org.mockito.Mockito.mock(Mockito.java:816)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.TestJobListCache.testAddExisting(TestJobListCache.java:42)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: mapreduce-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-issues-h...@hadoop.apache.org



[jira] [Updated] (MAPREDUCE-6738) TestJobListCache.testAddExisting failed intermittently in slow VM testbed

2016-07-20 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated MAPREDUCE-6738:
--
Attachment: MAPREDUCE-6738.patch

Uploaded a quick patch to fix it.

> TestJobListCache.testAddExisting failed intermittently in slow VM testbed
> -
>
> Key: MAPREDUCE-6738
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6738
> Project: Hadoop Map/Reduce
>  Issue Type: Test
>Affects Versions: 2.7.3
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Minor
> Attachments: MAPREDUCE-6738.patch
>
>
> Kicked-off Jenkins test runs occasionally failed on this test with the stack 
> trace below: 
> org.apache.hadoop.mapreduce.v2.hs.TestJobListCache.testAddExisting
> java.lang.Exception: test timed out after 1000 milliseconds
>   at org.mockito.cglib.proxy.Enhancer.generateClass(Enhancer.java:483)
>   at 
> org.mockito.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
>   at 
> org.mockito.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:217)
>   at org.mockito.cglib.proxy.Enhancer.createHelper(Enhancer.java:378)
>   at org.mockito.cglib.proxy.Enhancer.createClass(Enhancer.java:318)
>   at 
> org.mockito.internal.creation.jmock.ClassImposterizer.createProxyClass(ClassImposterizer.java:93)
>   at 
> org.mockito.internal.creation.jmock.ClassImposterizer.imposterise(ClassImposterizer.java:50)
>   at org.mockito.internal.util.MockUtil.createMock(MockUtil.java:54)
>   at org.mockito.internal.MockitoCore.mock(MockitoCore.java:45)
>   at org.mockito.Mockito.mock(Mockito.java:921)
>   at org.mockito.Mockito.mock(Mockito.java:816)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.TestJobListCache.testAddExisting(TestJobListCache.java:42)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: mapreduce-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-issues-h...@hadoop.apache.org



[jira] [Commented] (MAPREDUCE-6738) TestJobListCache.testAddExisting failed intermittently in slow VM testbed

2016-07-20 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386438#comment-15386438
 ] 

Junping Du commented on MAPREDUCE-6738:
---

The failure is only very occasionally reproducible locally, but Apache Jenkins 
has also reported this intermittent failure in previous runs, e.g.: 
http://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-dev/201512.mbox/%3C1730200133.9467.1448999224433.JavaMail.jenkins@crius%3E
Will deliver a quick fix to increase the timeout value for the test.
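
The patch itself is not quoted in this thread. Purely as an illustration of the kind of change described (raising a JUnit 4 per-test timeout so that the first Mockito proxy-class generation has headroom on a slow VM), here is a minimal, self-contained sketch; the class and test names are hypothetical, not the actual TestJobListCache code.

{code}
import static org.junit.Assert.assertNotNull;

import java.util.List;

import org.junit.Test;
import org.mockito.Mockito;

public class TestTimeoutBumpSketch {
  // The first Mockito.mock() call in a JVM triggers cglib proxy-class
  // generation, which can take well over 1000 ms on a slow, loaded VM.
  // Raising the JUnit 4 timeout (here to 5 seconds) keeps the guard
  // against hangs without making the test flaky.
  @Test(timeout = 5000)
  public void testFirstMockOnSlowHost() {
    List<?> mocked = Mockito.mock(List.class);
    assertNotNull(mocked);
  }
}
{code}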

> TestJobListCache.testAddExisting failed intermittently in slow VM testbed
> -
>
> Key: MAPREDUCE-6738
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6738
> Project: Hadoop Map/Reduce
>  Issue Type: Test
>Affects Versions: 2.7.3
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Minor
>
> Kicked-off Jenkins test runs occasionally failed on this test with the stack 
> trace below: 
> org.apache.hadoop.mapreduce.v2.hs.TestJobListCache.testAddExisting
> java.lang.Exception: test timed out after 1000 milliseconds
>   at org.mockito.cglib.proxy.Enhancer.generateClass(Enhancer.java:483)
>   at 
> org.mockito.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
>   at 
> org.mockito.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:217)
>   at org.mockito.cglib.proxy.Enhancer.createHelper(Enhancer.java:378)
>   at org.mockito.cglib.proxy.Enhancer.createClass(Enhancer.java:318)
>   at 
> org.mockito.internal.creation.jmock.ClassImposterizer.createProxyClass(ClassImposterizer.java:93)
>   at 
> org.mockito.internal.creation.jmock.ClassImposterizer.imposterise(ClassImposterizer.java:50)
>   at org.mockito.internal.util.MockUtil.createMock(MockUtil.java:54)
>   at org.mockito.internal.MockitoCore.mock(MockitoCore.java:45)
>   at org.mockito.Mockito.mock(Mockito.java:921)
>   at org.mockito.Mockito.mock(Mockito.java:816)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.TestJobListCache.testAddExisting(TestJobListCache.java:42)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: mapreduce-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-issues-h...@hadoop.apache.org



[jira] [Created] (MAPREDUCE-6738) TestJobListCache.testAddExisting failed intermittently in slow VM testbed

2016-07-20 Thread Junping Du (JIRA)
Junping Du created MAPREDUCE-6738:
-

 Summary: TestJobListCache.testAddExisting failed intermittently in 
slow VM testbed
 Key: MAPREDUCE-6738
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6738
 Project: Hadoop Map/Reduce
  Issue Type: Test
Affects Versions: 2.7.3
Reporter: Junping Du
Assignee: Junping Du
Priority: Minor


Kicked-off Jenkins test runs occasionally failed on this test with the stack trace 
below: 
org.apache.hadoop.mapreduce.v2.hs.TestJobListCache.testAddExisting

java.lang.Exception: test timed out after 1000 milliseconds
at org.mockito.cglib.proxy.Enhancer.generateClass(Enhancer.java:483)
at 
org.mockito.cglib.core.DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
at 
org.mockito.cglib.core.AbstractClassGenerator.create(AbstractClassGenerator.java:217)
at org.mockito.cglib.proxy.Enhancer.createHelper(Enhancer.java:378)
at org.mockito.cglib.proxy.Enhancer.createClass(Enhancer.java:318)
at 
org.mockito.internal.creation.jmock.ClassImposterizer.createProxyClass(ClassImposterizer.java:93)
at 
org.mockito.internal.creation.jmock.ClassImposterizer.imposterise(ClassImposterizer.java:50)
at org.mockito.internal.util.MockUtil.createMock(MockUtil.java:54)
at org.mockito.internal.MockitoCore.mock(MockitoCore.java:45)
at org.mockito.Mockito.mock(Mockito.java:921)
at org.mockito.Mockito.mock(Mockito.java:816)
at 
org.apache.hadoop.mapreduce.v2.hs.TestJobListCache.testAddExisting(TestJobListCache.java:42)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: mapreduce-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-issues-h...@hadoop.apache.org



[jira] [Updated] (MAPREDUCE-6365) Refactor JobResourceUploader#uploadFilesInternal

2016-07-20 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated MAPREDUCE-6365:
---
Fix Version/s: (was: 3.0.0-alpha2)
   2.9.0

> Refactor JobResourceUploader#uploadFilesInternal
> 
>
> Key: MAPREDUCE-6365
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6365
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: MAPREDUCE-6365-trunk-v1.patch
>
>
> JobResourceUploader#uploadFilesInternal is a large method and there are 
> similar pieces of code that could probably be pulled out into separate 
> methods.  This refactor would improve readability of the code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: mapreduce-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-issues-h...@hadoop.apache.org



[jira] [Commented] (MAPREDUCE-6365) Refactor JobResourceUploader#uploadFilesInternal

2016-07-20 Thread Chris Trezzo (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386362#comment-15386362
 ] 

Chris Trezzo commented on MAPREDUCE-6365:
-

Thanks [~sjlee0]! The intention would be to get MapReduce support for Shared 
cache in 2.9.0, so backporting this refactor to 2.9.0 sounds great.

> Refactor JobResourceUploader#uploadFilesInternal
> 
>
> Key: MAPREDUCE-6365
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6365
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
>Priority: Minor
> Fix For: 3.0.0-alpha2
>
> Attachments: MAPREDUCE-6365-trunk-v1.patch
>
>
> JobResourceUploader#uploadFilesInternal is a large method and there are 
> similar pieces of code that could probably be pulled out into separate 
> methods.  This refactor would improve readability of the code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: mapreduce-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-issues-h...@hadoop.apache.org



[jira] [Commented] (MAPREDUCE-6737) HS: job history recovery fails with NumberFormatException if the job wasn't initted properly

2016-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386339#comment-15386339
 ] 

Hadoop QA commented on MAPREDUCE-6737:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
48s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
1s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 48s 
{color} | {color:green} hadoop-mapreduce-client-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 1s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12819056/MAPREDUCE-6737.patch |
| JIRA Issue | MAPREDUCE-6737 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux bb0e5c3f173d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 38128ba |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6626/testReport/ |
| modules | C: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common 
U: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common 
|
| Console output | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6626/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> HS: job history recovery fails with NumberFormatException if the job wasn't 
> initted properly
> -
>
> Key: MAPREDUCE-6737
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6737
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>

[jira] [Commented] (MAPREDUCE-6724) Unsafe conversion from long to int in MergeManagerImpl.unconditionalReserve()

2016-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386319#comment-15386319
 ] 

Hadoop QA commented on MAPREDUCE-6724:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
9s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 5s 
{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
14s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 49s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12819126/mapreduce6724.006.patch
 |
| JIRA Issue | MAPREDUCE-6724 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 550a593e2212 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 38128ba |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6625/testReport/ |
| modules | C: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
U: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core |
| Console output | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6625/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Unsafe conversion from long to int in MergeManagerImpl.unconditionalReserve()
> -
>
> Key: MAPREDUCE-6724
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6724
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  

[jira] [Commented] (MAPREDUCE-6737) HS: job history recovery fails with NumberFormatException if the job wasn't initted properly

2016-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386244#comment-15386244
 ] 

Hadoop QA commented on MAPREDUCE-6737:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 47s 
{color} | {color:green} hadoop-mapreduce-client-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 54s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12819056/MAPREDUCE-6737.patch |
| JIRA Issue | MAPREDUCE-6737 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b79ea31c5871 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1c9d2ab |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6624/testReport/ |
| modules | C: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common 
U: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common 
|
| Console output | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6624/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> HS: job history recovery fails with NumberFormatException if the job wasn't 
> initted properly
> -
>
> Key: MAPREDUCE-6737
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6737
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>   

[jira] [Commented] (MAPREDUCE-6737) HS: job history recovery fails with NumericFormatException if the job wasn't initted properly

2016-07-20 Thread Roman Gavryliuk (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386149#comment-15386149
 ] 

Roman Gavryliuk commented on MAPREDUCE-6737:


I'm not sure we really need a unit test for replacing '-1' with '0'. This 
default value has never been exercised by the existing unit tests.

> HS: job history recovery fails with NumberFormatException if the job wasn't 
> initted properly
> -
>
> Key: MAPREDUCE-6737
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6737
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: jobhistoryserver
>Affects Versions: 2.7.0, 2.5.1
>Reporter: Roman Gavryliuk
> Attachments: MAPREDUCE-6737.patch
>
>
> The problem shows itself while recovering old apps information:
> 2016-07-18 16:08:35,031 WARN
> org.apache.hadoop.mapreduce.v2.jobhistory.FileNameIndexUtils: Unable to parse
> start time from job history file
> job_1468716177837_21790-1468845880296-username-applicationname-1468845889100-0-0-FAILED-root.queuename--1.jhist
> : java.lang.NumberFormatException: For input string: 
> ""
> The problem is in the JobHistoryEventHandler.java class, in the
> following part of the code:
> //initialize the launchTime in the JobIndexInfo of MetaInfo
>   if(event.getHistoryEvent().getEventType() == EventType.JOB_INITED ){
> JobInitedEvent jie = (JobInitedEvent) event.getHistoryEvent();
> mi.getJobIndexInfo().setJobStartTime(jie.getLaunchTime());
> Because the job was not initialized properly, the 'if' statement evaluates to
> 'false' and .setJobStartTime() is not called.
> In the JobIndexInfo constructor, we have a default value for jobStartTime:
> this.jobStartTime = -1;
> When the history server recovers an application's info, it splits all the
> parameters into an array of strings, jobDetails:
> String[] jobDetails = fileName.split(DELIMITER);
> Please note, DELIMITER is initialized in the following way:
> static final String DELIMITER = "-";
> So the jobDetails array has 10 elements (job ID, submit time, username, job
> name, finish time, number of maps, number of reducers, job status, queue, and
> start time).
> If jobStartTime = -1, the minus sign is treated as a delimiter and the code
> assigns an empty string "" as the value of the 9th element in the jobDetails
> array.
> In the org.apache.hadoop.mapreduce.v2.jobhistory.FileNameIndexUtils class, a
> NumberFormatException is thrown while trying to parse the empty string as a
> long:
> Long.parseLong(decodeJobHistoryFileName(jobDetails[JOB_START_TIME_INDEX])));
> The simplest fix is to change the default value of this.jobStartTime to 0 in
> the JobIndexInfo constructor.
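
For illustration only, a self-contained sketch of the parsing failure described above, using plain JDK string handling and a made-up history file name; the index 9 stands in for JOB_START_TIME_INDEX and is an assumption about the field layout, not the actual FileNameIndexUtils code.

{code}
public class JhistStartTimeSketch {
  public static void main(String[] args) {
    // Hypothetical .jhist name written with jobStartTime = -1: the leading
    // minus sign acts as an extra delimiter, so the start-time field is "".
    String broken =
        "job_1_0001-1468845880296-user-app-1468845889100-0-0-FAILED-root.queue--1";
    try {
      Long.parseLong(broken.split("-")[9]);
    } catch (NumberFormatException e) {
      System.out.println("Fails as described: " + e.getMessage()); // For input string: ""
    }

    // With a default of 0 instead of -1, the same field survives the split.
    String fixed =
        "job_1_0001-1468845880296-user-app-1468845889100-0-0-FAILED-root.queue-0";
    System.out.println(Long.parseLong(fixed.split("-")[9])); // prints 0
  }
}
{code}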



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: mapreduce-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-issues-h...@hadoop.apache.org



[jira] [Commented] (MAPREDUCE-6735) Performance degradation caused by MAPREDUCE-5465 and HADOOP-12107

2016-07-20 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386142#comment-15386142
 ] 

Jason Lowe commented on MAPREDUCE-6735:
---

It would also be good to know more details on the baseline being used when 
these commits are added/removed so we know what other changes are 
present/absent.  Is this a 2.8 baseline with reverts on these two commits or 
something else?

> Performance degradation caused by MAPREDUCE-5465 and HADOOP-12107
> -
>
> Key: MAPREDUCE-6735
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6735
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Alexandr Balitsky
>
> Two commits, MAPREDUCE-5465 and HADOOP-12107, are making Terasort on YARN 10% 
> slower.
> Reduce phase with those commits: ~5 mins.
> Reduce phase without them: ~3.5 mins.
> The average reduce takes 4 min 16 sec with those commits, compared to 3 min 
> 48 sec without.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: mapreduce-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (MAPREDUCE-6380) AggregatedLogDeletionService will throw exception when there are some other directories in remote-log-dir

2016-07-20 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385995#comment-15385995
 ] 

Varun Saxena edited comment on MAPREDUCE-6380 at 7/20/16 4:04 PM:
--

[~lewuathe]
bq. Currently listStatus is checking against root log dir. So returned list is 
user log dirs like /tmp/logs/userA, /tmp/logs/userB, /tmp/logs/userC.
Yes, the listStatus in FileDeletionTask#run does list against the root log dir. 
But I am not talking about that.
I am talking about call to listStatus inside 
FileDeletionTask#deleteOldLogDirsFrom (the method where actual deletion 
happens). Refer to code below based on latest trunk code in 
AggregatedLogDeletionService:Lines 82-87 (without applying your patch).
{code}
82. for(FileStatus userDir : fs.listStatus(remoteRootLogDir)) {
83.  if(userDir.isDirectory()) {
84.Path userDirPath = new Path(userDir.getPath(), suffix);
85.deleteOldLogDirsFrom(userDirPath, cutoffMillis, fs, rmClient);
86.  }
87. }
{code}
At line 84, userDirPath is created with the suffix(which is logs by default). 
This path will be of the form {{/tmp/logs/userA/logs}}.
The change in the patch is to call FileStatus#listStatusIterator on this 
userDirPath to check if the path exists or not (and continuing if 
FileNotFoundException is thrown).
But if you notice on line 85 in code snippet above, we call 
FileDeletionTask#deleteOldLogDirsFrom, wherein we pass the same userDirPath 
which we had created on line 84 (i.e. path of the form 
{{/tmp/logs/userA/logs}}).

Now, the code in deleteOldLogDirsFrom is something like below.
{code}
95.  private static void deleteOldLogDirsFrom(Path dir, long cutoffMillis, 
96.  FileSystem fs, ApplicationClientProtocol rmClient) {
97.try {
98.   for(FileStatus appDir : fs.listStatus(dir)) {
..
128. }
129.   } catch (IOException e) {
130. logIOException("Could not read the contents of " + dir, e);
131.   }
{code}

As you can see in this method, we again call {{listStatus}} on the passed dir 
which will be userDirPath (i.e. path of the form {{/tmp/logs/userA/logs}}).
And this is where the FileNotFoundException is thrown (without your changes). 
And the code does proceed to the next user dir, because we do catch this 
FileNotFoundException inside deleteOldLogDirsFrom (as IOException is 
caught - see the code snippet above). I think this JIRA is only about the stack 
trace being printed over and over again, which is why it is filed as trivial. The 
issue with the app id format is a bigger problem, though.

So my comment was: if we already call listStatus on userDirPath (paths like 
{{/tmp/logs/userA/logs}}) inside deleteOldLogDirsFrom, then what is the need of 
calling listStatusIterator on userDirPath in the run() method? This would lead to 
an additional RPC call to the Namenode in the positive case (i.e. when extraneous 
directories do not exist).


was (Author: varun_saxena):
[~lewuathe]
bq. Currently listStatus is checking against root log dir. So returned list is 
user log dirs like /tmp/logs/userA, /tmp/logs/userB, /tmp/logs/userC.
Yes, the listStatus in FileDeletionTask#run does list against the root log dir. 
But I am not talking about that.
I am talking about call to listStatus inside 
FileDeletionTask#deleteOldLogDirsFrom (the method where actual deletion 
happens). Refer to code below based on latest trunk code in 
AggregatedLogDeletionService:Lines 82-87 (without applying your patch).
{code}
82. for(FileStatus userDir : fs.listStatus(remoteRootLogDir)) {
83.  if(userDir.isDirectory()) {
84.Path userDirPath = new Path(userDir.getPath(), suffix);
85.deleteOldLogDirsFrom(userDirPath, cutoffMillis, fs, rmClient);
86.  }
87. }
{code}
At line 84, userDirPath is created with the suffix(which is logs by default). 
This path will be of the form {{/tmp/logs/userA/logs}}.
The change in the patch is to call FileStatus#listStatusIterator on this 
userDirPath to check if the path exists or not (and continuing if 
FileNotFoundException is thrown).
But if you notice on line 85 in code snippet above, we call 
FileDeletionTask#deleteOldLogDirsFrom, wherein we pass the same userDirPath 
which we had created on line 84 (i.e. path of the form 
{{/tmp/logs/userA/logs}}).

Now, the code in deleteOldLogDirsFrom is something like below.
{code}
95.  private static void deleteOldLogDirsFrom(Path dir, long cutoffMillis, 
96.  FileSystem fs, ApplicationClientProtocol rmClient) {
97.try {
98.   for(FileStatus appDir : fs.listStatus(dir)) {
..
128. }
129.   } catch (IOException e) {
130. logIOException("Could not read the contents of " + dir, e);
131.   }
{code}

As you can see in this method, we again call {{listStatus}} on the passed dir 
which will be userDirPath (i.e. path of the form {{/tmp/logs/userA/logs}}).
And this is where the FileNotFoundException is thrown (without your changes). 
And the 

[jira] [Updated] (MAPREDUCE-6724) Unsafe conversion from long to int in MergeManagerImpl.unconditionalReserve()

2016-07-20 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated MAPREDUCE-6724:
--
Attachment: mapreduce6724.006.patch

> Unsafe conversion from long to int in MergeManagerImpl.unconditionalReserve()
> -
>
> Key: MAPREDUCE-6724
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6724
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv2
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: mapreduce6724.001.patch, mapreduce6724.002.patch, 
> mapreduce6724.003.patch, mapreduce6724.004.patch, mapreduce6724.005.patch, 
> mapreduce6724.006.patch
>
>
> When shuffle is done in memory, MergeManagerImpl converts the requested size 
> to an int to allocate an instance of InMemoryMapOutput. This results in an 
> overflow if the requested size is bigger than Integer.MAX_VALUE and 
> eventually causes the reducer to fail.
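
As context for the fix, a minimal sketch of guarding a long-to-int narrowing before allocating an in-memory buffer; the method name and the exception choice are assumptions made for illustration, not the actual MAPREDUCE-6724 patch.

{code}
public final class SafeNarrowingSketch {
  // Reject sizes that cannot be represented as a non-negative int instead of
  // letting a plain (int) cast silently overflow.
  static int toInMemoryBufferSize(long requestedSize) {
    if (requestedSize < 0 || requestedSize > Integer.MAX_VALUE) {
      throw new IllegalArgumentException("Cannot buffer a " + requestedSize
          + "-byte map output segment in memory");
    }
    return (int) requestedSize;
  }

  public static void main(String[] args) {
    System.out.println(toInMemoryBufferSize(64L * 1024 * 1024));       // 67108864
    System.out.println((int) (3L * 1024 * 1024 * 1024));               // overflows: -1073741824
    System.out.println(toInMemoryBufferSize(3L * 1024 * 1024 * 1024)); // throws
  }
}
{code}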



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: mapreduce-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-issues-h...@hadoop.apache.org



[jira] [Commented] (MAPREDUCE-6724) Unsafe conversion from long to int in MergeManagerImpl.unconditionalReserve()

2016-07-20 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386063#comment-15386063
 ] 

Haibo Chen commented on MAPREDUCE-6724:
---

Uploading a new patch to have canShuffleToMemory inlined. Thanks 
[~jira.shegalov] again for the reviews. 

> Unsafe conversion from long to int in MergeManagerImpl.unconditionalReserve()
> -
>
> Key: MAPREDUCE-6724
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6724
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mrv2
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: mapreduce6724.001.patch, mapreduce6724.002.patch, 
> mapreduce6724.003.patch, mapreduce6724.004.patch, mapreduce6724.005.patch, 
> mapreduce6724.006.patch
>
>
> When shuffle is done in memory, MergeManagerImpl converts the requested size 
> to an int to allocate an instance of InMemoryMapOutput. This results in an 
> overflow if the requested size is bigger than Integer.MAX_VALUE and 
> eventually causes the reducer to fail.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: mapreduce-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-issues-h...@hadoop.apache.org



[jira] [Commented] (MAPREDUCE-6737) HS: job history recovery fails with NumberFormatException if the job wasn't initted properly

2016-07-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15386048#comment-15386048
 ] 

Hadoop QA commented on MAPREDUCE-6737:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
48s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 50s 
{color} | {color:green} hadoop-mapreduce-client-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 34s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12819056/MAPREDUCE-6737.patch |
| JIRA Issue | MAPREDUCE-6737 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ad1633604e52 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 37362c2 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6623/testReport/ |
| modules | C: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common 
U: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common 
|
| Console output | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6623/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> HS: job history recovery fails with NumberFormatException if the job wasn't 
> initted properly
> -
>
> Key: MAPREDUCE-6737
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6737
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>   

[jira] [Comment Edited] (MAPREDUCE-6380) AggregatedLogDeletionService will throw exception when there are some other directories in remote-log-dir

2016-07-20 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385995#comment-15385995
 ] 

Varun Saxena edited comment on MAPREDUCE-6380 at 7/20/16 3:06 PM:
--

[~lewuathe]
bq. Currently listStatus is checking against root log dir. So returned list is 
user log dirs like /tmp/logs/userA, /tmp/logs/userB, /tmp/logs/userC.
Yes, the listStatus in FileDeletionTask#run does list against the root log dir. 
But I am not talking about that.
I am talking about call to listStatus inside 
FileDeletionTask#deleteOldLogDirsFrom (the method where actual deletion 
happens). Refer to code below based on latest trunk code in 
AggregatedLogDeletionService:Lines 82-87 (without applying your patch).
{code}
82. for(FileStatus userDir : fs.listStatus(remoteRootLogDir)) {
83.  if(userDir.isDirectory()) {
84.Path userDirPath = new Path(userDir.getPath(), suffix);
85.deleteOldLogDirsFrom(userDirPath, cutoffMillis, fs, rmClient);
86.  }
87. }
{code}
At line 84, userDirPath is created with the suffix(which is logs by default). 
This path will be of the form {{/tmp/logs/userA/logs}}.
The change in the patch is to call FileStatus#listStatusIterator on this 
userDirPath to check if the path exists or not (and continuing if 
FileNotFoundException is thrown).
But if you notice on line 85 in code snippet above, we call 
FileDeletionTask#deleteOldLogDirsFrom, wherein we pass the same userDirPath 
which we had created on line 84 (i.e. path of the form 
{{/tmp/logs/userA/logs}}).

Now, the code in deleteOldLogDirsFrom is something like below.
{code}
95.  private static void deleteOldLogDirsFrom(Path dir, long cutoffMillis, 
96.  FileSystem fs, ApplicationClientProtocol rmClient) {
97.try {
98.   for(FileStatus appDir : fs.listStatus(dir)) {
..
128. }
129.   } catch (IOException e) {
130. logIOException("Could not read the contents of " + dir, e);
131.   }
{code}

As you can see in this method, we again call {{listStatus}} on the passed dir 
which will be userDirPath (i.e. path of the form {{/tmp/logs/userA/logs}}).
And this is where the FileNotFoundException is thrown (without your changes). 
And the code does proceed to the next user dir, because we do catch this 
FileNotFoundException inside deleteOldLogDirsFrom. I think this JIRA is only 
about the stack trace being printed over and over again, which is why it is 
filed as trivial. The issue with the app id format is a bigger problem, though.

So my comment was: if we already call listStatus on userDirPath (paths like 
{{/tmp/logs/userA/logs}}) inside deleteOldLogDirsFrom, then what is the need of 
calling listStatusIterator on userDirPath in the run() method? This would lead to 
an additional RPC call to the Namenode in the positive case (i.e. when extraneous 
directories do not exist).


was (Author: varun_saxena):
bq. Currently listStatus is checking against root log dir. So returned list is 
user log dirs like /tmp/logs/userA, /tmp/logs/userB, /tmp/logs/userC.
Yes, the listStatus in FileDeletionTask#run does list against the root log dir. 
But I am not talking about that.
I am talking about call to listStatus inside 
FileDeletionTask#deleteOldLogDirsFrom (the method where actual deletion 
happens). Refer to code below based on latest trunk code in 
AggregatedLogDeletionService:Lines 82-87 (without applying your patch).
{code}
82. for(FileStatus userDir : fs.listStatus(remoteRootLogDir)) {
83.  if(userDir.isDirectory()) {
84.Path userDirPath = new Path(userDir.getPath(), suffix);
85.deleteOldLogDirsFrom(userDirPath, cutoffMillis, fs, rmClient);
86.  }
87. }
{code}
At line 84, userDirPath is created with the suffix(which is logs by default). 
This path will be of the form {{/tmp/logs/userA/logs}}.
The change in the patch is to call FileStatus#listStatusIterator on this 
userDirPath to check if the path exists or not (and continuing if 
FileNotFoundException is thrown).
But if you notice on line 85 in code snippet above, we call 
FileDeletionTask#deleteOldLogDirsFrom, wherein we pass the same userDirPath 
which we had created on line 84 (i.e. path of the form 
{{/tmp/logs/userA/logs}}).

Now, the code in deleteOldLogDirsFrom is something like below.
{code}
95.  private static void deleteOldLogDirsFrom(Path dir, long cutoffMillis, 
96.  FileSystem fs, ApplicationClientProtocol rmClient) {
97.try {
98.   for(FileStatus appDir : fs.listStatus(dir)) {
..
128. }
129.   } catch (IOException e) {
130. logIOException("Could not read the contents of " + dir, e);
131.   }
{code}

As you can see in this method, we again call {{listStatus}} on the passed dir 
which will be userDirPath (i.e. path of the form {{/tmp/logs/userA/logs}}).
And this is where the FileNotFoundException is thrown (without your changes). 
And the code does proceed and take on the next user dir because we do catch 

[jira] [Commented] (MAPREDUCE-6380) AggregatedLogDeletionService will throw exception when there are some other directories in remote-log-dir

2016-07-20 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385995#comment-15385995
 ] 

Varun Saxena commented on MAPREDUCE-6380:
-

bq. Currently listStatus is checking against root log dir. So returned list is 
user log dirs like /tmp/logs/userA, /tmp/logs/userB, /tmp/logs/userC.
Yes, the listStatus in FileDeletionTask#run does list against the root log dir. 
But I am not talking about that.
I am talking about call to listStatus inside 
FileDeletionTask#deleteOldLogDirsFrom (the method where actual deletion 
happens). Refer to code below based on latest trunk code in 
AggregatedLogDeletionService:Lines 82-87 (without applying your patch).
{code}
82. for(FileStatus userDir : fs.listStatus(remoteRootLogDir)) {
83.  if(userDir.isDirectory()) {
84.Path userDirPath = new Path(userDir.getPath(), suffix);
85.deleteOldLogDirsFrom(userDirPath, cutoffMillis, fs, rmClient);
86.  }
87. }
{code}
At line 84, userDirPath is created with the suffix(which is logs by default). 
This path will be of the form {{/tmp/logs/userA/logs}}.
The change in the patch is to call FileStatus#listStatusIterator on this 
userDirPath to check if the path exists or not (and continuing if 
FileNotFoundException is thrown).
But if you notice on line 85 in code snippet above, we call 
FileDeletionTask#deleteOldLogDirsFrom, wherein we pass the same userDirPath 
which we had created on line 84 (i.e. path of the form 
{{/tmp/logs/userA/logs}}).

Now, the code in deleteOldLogDirsFrom is something like below.
{code}
95.  private static void deleteOldLogDirsFrom(Path dir, long cutoffMillis, 
96.  FileSystem fs, ApplicationClientProtocol rmClient) {
97.try {
98.   for(FileStatus appDir : fs.listStatus(dir)) {
..
128. }
129.   } catch (IOException e) {
130. logIOException("Could not read the contents of " + dir, e);
131.   }
{code}

As you can see in this method, we again call {{listStatus}} on the passed dir 
which will be userDirPath (i.e. path of the form {{/tmp/logs/userA/logs}}).
And this is where the FileNotFoundException is thrown (without your changes). 
And the code does proceed to the next user dir, because we do catch this 
FileNotFoundException inside deleteOldLogDirsFrom. I think this JIRA is only 
about the stack trace being printed over and over again, which is why it is 
filed as trivial. The issue with the app id format is a bigger problem, though.

So my comment was: if we already call listStatus on userDirPath (paths like 
{{/tmp/logs/userA/logs}}) inside deleteOldLogDirsFrom, then what is the need of 
calling listStatusIterator on userDirPath in the run() method? This would lead 
to an additional RPC call to the NameNode in the positive case (i.e. when 
extraneous directories do not exist).
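
For illustration only, here is a minimal sketch (not the attached patch; it 
assumes the existing LOG and logIOException helpers in 
AggregatedLogDeletionService) of how a missing suffix dir could be handled 
inside deleteOldLogDirsFrom itself, so it is skipped quietly without an extra 
existence check:
{code}
private static void deleteOldLogDirsFrom(Path dir, long cutoffMillis,
    FileSystem fs, ApplicationClientProtocol rmClient) {
  FileStatus[] appDirs;
  try {
    appDirs = fs.listStatus(dir);  // the one RPC we already make per user dir
  } catch (FileNotFoundException e) {
    // Suffix dir such as /tmp/logs/userA/logs does not exist: skip it quietly
    // instead of logging the full stack trace over and over again.
    LOG.debug("Log dir " + dir + " does not exist, skipping");
    return;
  } catch (IOException e) {
    logIOException("Could not read the contents of " + dir, e);
    return;
  }
  for (FileStatus appDir : appDirs) {
    // ... existing per-application deletion logic ...
  }
}
{code}
This keeps the behaviour of moving on to the next user dir while avoiding both 
the extra NameNode RPC and the repeated stack traces.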

> AggregatedLogDeletionService will throw exception when there are some other 
> directories in remote-log-dir
> -
>
> Key: MAPREDUCE-6380
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6380
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: jobhistoryserver
>Reporter: Zhang Wei
>Assignee: Kai Sasaki
>Priority: Trivial
> Attachments: MAPREDUCE-6380.01.patch, MAPREDUCE-6380.02.patch, 
> MAPREDUCE-6380.03.patch, MAPREDUCE-6380.04.patch, MAPREDUCE-6380.05.patch, 
> MAPREDUCE-6380.06.patch, MAPREDUCE-6380.07.patch
>
>
> AggregatedLogDeletionService will throw FileNotFoundException when there are 
> some extraneous directories put in remote-log-dir. The deletion function will 
> try to listStatus against the "extraneous-dir + suffix" dir. I think it 
> would be better if the function can ignore these directories.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: mapreduce-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-issues-h...@hadoop.apache.org



[jira] [Commented] (MAPREDUCE-6380) AggregatedLogDeletionService will throw exception when there are some other directories in remote-log-dir

2016-07-20 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15385837#comment-15385837
 ] 

Kai Sasaki commented on MAPREDUCE-6380:
---

[~varun_saxena] Thanks for the detailed explanation. What I'm trying to do in 
this JIRA is described below.

Currently {{listStatus}} is checking against the root log dir, so the returned 
list is the user log dirs, like {{/tmp/logs/userA}}, {{/tmp/logs/userB}}, 
{{/tmp/logs/userC}}.
In this case, an iteration is done over each user log dir (userA, userB, 
userC). But if {{userB}} does not have a {{/tmp/logs/userB/logs}} dir, an 
exception occurs and the iteration stops, even if later user log dirs have a 
proper log dir like {{/tmp/logs/userC/logs}}. So the patch keeps the iteration 
progressing to {{userC}} even if {{/tmp/logs/userB}} does not have a 
{{/tmp/logs/userB/logs}} dir. Therefore, checking whether {{/tmp/logs/userB}} 
has a {{logs}} dir and actual log files is necessary.
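
For clarity, here is a minimal sketch of that intended behaviour in the outer 
loop (illustrative only, not the attached patch; a plain fs.exists() check 
stands in for whatever existence check the patch actually performs):
{code}
for (FileStatus userDir : fs.listStatus(remoteRootLogDir)) {
  if (userDir.isDirectory()) {
    Path userDirPath = new Path(userDir.getPath(), suffix);
    // e.g. /tmp/logs/userB has no "logs" suffix dir: skip it and keep
    // iterating so that /tmp/logs/userC/logs is still cleaned up.
    if (!fs.exists(userDirPath)) {
      continue;
    }
    deleteOldLogDirsFrom(userDirPath, cutoffMillis, fs, rmClient);
  }
}
{code}
As Varun points out above, the trade-off is an additional NameNode RPC per 
user dir when the suffix dir does exist.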

{quote}
If extraneous directories have to be considered, I see a bigger issue while 
listing app directories. If there is a spurious directory inside 
/tmp/logs/user/logs which is not in app id format (i.e. a directory like 
/tmp/logs/user/logs/dummy), it will be bigger problem.
{quote}
Yes, as you said the format of app logs should also be considered here.

> AggregatedLogDeletionService will throw exception when there are some other 
> directories in remote-log-dir
> -
>
> Key: MAPREDUCE-6380
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6380
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: jobhistoryserver
>Reporter: Zhang Wei
>Assignee: Kai Sasaki
>Priority: Trivial
> Attachments: MAPREDUCE-6380.01.patch, MAPREDUCE-6380.02.patch, 
> MAPREDUCE-6380.03.patch, MAPREDUCE-6380.04.patch, MAPREDUCE-6380.05.patch, 
> MAPREDUCE-6380.06.patch, MAPREDUCE-6380.07.patch
>
>
> AggregatedLogDeletionService will throw FileNotFoundException when there are 
> some extraneous directories put in remote-log-dir. The deletion function will 
> try to listStatus against the "extraneous-dir + suffix" dir. I think it 
> would be better if the function can ignore these directories.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: mapreduce-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-issues-h...@hadoop.apache.org



[jira] [Updated] (MAPREDUCE-6737) HS: job history recovery fails with NumericFormatException if the job wasn't initted properly

2016-07-20 Thread Roman Gavryliuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Gavryliuk updated MAPREDUCE-6737:
---
Attachment: MAPREDUCE-6737.patch

Attached patch by [~Tatyana But]

> HS: job history recovery fails with NumericFormatException if the job wasn't 
> initted properly
> -
>
> Key: MAPREDUCE-6737
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6737
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: jobhistoryserver
>Affects Versions: 2.7.0, 2.5.1
>Reporter: Roman Gavryliuk
> Attachments: MAPREDUCE-6737.patch
>
>
> The problem shows itself while recovering old apps information:
> 2016-07-18 16:08:35,031 WARN
> org.apache.hadoop.mapreduce.v2.jobhistory.FileNameIndexUtils: Unable to parse
> start time from job history file
> job_1468716177837_21790-1468845880296-username-applicationname-1468845889100-0-0-FAILED-root.queuename--1.jhist
> : java.lang.NumberFormatException: For input string: 
> ""
> The problem is in the JobHistoryEventHandler.java class, in the
> following part of the code:
> //initialize the launchTime in the JobIndexInfo of MetaInfo
>   if(event.getHistoryEvent().getEventType() == EventType.JOB_INITED ){
> JobInitedEvent jie = (JobInitedEvent) event.getHistoryEvent();
> mi.getJobIndexInfo().setJobStartTime(jie.getLaunchTime());
> Because the job was not initialized properly, the 'if' statement evaluates to
> 'false' and setJobStartTime() is not called.
> In the JobIndexInfo constructor, we have a default value for jobStartTime:
> this.jobStartTime = -1;
> When the history server recovers any application's info, it passes all
> parameters to an array of strings, jobDetails:
> String[] jobDetails = fileName.split(DELIMITER);
> Please note, DELIMITER is initialized in the following way:
> static final String DELIMITER = "-";
> So, the jobDetails array has 10 elements (job ID, submit time, username, job
> name, finish time, number of maps, number of reducers, job status, queue, and
> start time).
> If jobStartTime = -1, the minus sign is treated as a delimiter and the code
> will assign an empty string "" as the value of the 9th element (the start
> time) in the jobDetails array.
> In the org.apache.hadoop.mapreduce.v2.jobhistory.FileNameIndexUtils class, a
> NumberFormatException is thrown while trying to parse the empty string as a
> long:
> Long.parseLong(decodeJobHistoryFileName(jobDetails[JOB_START_TIME_INDEX])));
> The simplest fix is to change the value of this.jobStartTime to 0 in the
> JobIndexInfo constructor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: mapreduce-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-issues-h...@hadoop.apache.org



[jira] [Updated] (MAPREDUCE-6737) HS: job history recovery fails with NumericFormatException if the job wasn't initted properly

2016-07-20 Thread Roman Gavryliuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Gavryliuk updated MAPREDUCE-6737:
---
Status: Patch Available  (was: Open)

> HS: job history recovery fails with NumericFormatException if the job wasn't 
> initted properly
> -
>
> Key: MAPREDUCE-6737
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6737
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: jobhistoryserver
>Affects Versions: 2.5.1, 2.7.0
>Reporter: Roman Gavryliuk
> Attachments: MAPREDUCE-6737.patch
>
>
> The problem shows itself while recovering old apps information:
> 2016-07-18 16:08:35,031 WARN
> org.apache.hadoop.mapreduce.v2.jobhistory.FileNameIndexUtils: Unable to parse
> start time from job history file
> job_1468716177837_21790-1468845880296-username-applicationname-1468845889100-0-0-FAILED-root.queuename--1.jhist
> : java.lang.NumberFormatException: For input string: 
> ""
> The problem is in the JobHistoryEventHandler.java class, in the
> following part of the code:
> //initialize the launchTime in the JobIndexInfo of MetaInfo
>   if(event.getHistoryEvent().getEventType() == EventType.JOB_INITED ){
> JobInitedEvent jie = (JobInitedEvent) event.getHistoryEvent();
> mi.getJobIndexInfo().setJobStartTime(jie.getLaunchTime());
> Because the job was not initialized properly, the 'if' statement evaluates to
> 'false' and setJobStartTime() is not called.
> In the JobIndexInfo constructor, we have a default value for jobStartTime:
> this.jobStartTime = -1;
> When the history server recovers any application's info, it passes all
> parameters to an array of strings, jobDetails:
> String[] jobDetails = fileName.split(DELIMITER);
> Please note, DELIMITER is initialized in the following way:
> static final String DELIMITER = "-";
> So, the jobDetails array has 10 elements (job ID, submit time, username, job
> name, finish time, number of maps, number of reducers, job status, queue, and
> start time).
> If jobStartTime = -1, the minus sign is treated as a delimiter and the code
> will assign an empty string "" as the value of the 9th element (the start
> time) in the jobDetails array.
> In the org.apache.hadoop.mapreduce.v2.jobhistory.FileNameIndexUtils class, a
> NumberFormatException is thrown while trying to parse the empty string as a
> long:
> Long.parseLong(decodeJobHistoryFileName(jobDetails[JOB_START_TIME_INDEX])));
> The simplest fix is to change the value of this.jobStartTime to 0 in the
> JobIndexInfo constructor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: mapreduce-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-issues-h...@hadoop.apache.org



[jira] [Updated] (MAPREDUCE-6736) import hive table from parquet files, there is no 'job.splitmetainfo' file message

2016-07-20 Thread Ha, Hun Cheol (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ha, Hun Cheol updated MAPREDUCE-6736:
-
Description: 
This is the same issue as https://issues.apache.org/jira/browse/MAPREDUCE-3056,
which was created on 2011.09.21 and fixed on 2011.10.04.
There is a user (Sergey) who reported the same issue on 2015.05.13 too (see the
last comment of the above link).

On the beeline prompt, I tried to import a hive table from parquet files that
were exported from another hive table, but there was no "job.splitmetainfo"
file in the staging directory, so a FileNotFoundException occurred.

I tried with the yarn user and the hdfs user, with different permissions and
directory ownership, but nothing changed.

full log messages below

==

Log Type: syslog
Log Upload Time: Wed Jul 20 17:57:36 +0900 2016
Log Length: 21439
2016-07-20 17:57:26,139 INFO [main] 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for 
application appattempt_1468834620182_0036_01
2016-07-20 17:57:26,417 WARN [main] org.apache.hadoop.util.NativeCodeLoader: 
Unable to load native-hadoop library for your platform... using builtin-java 
classes where applicable
2016-07-20 17:57:26,463 INFO [main] 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
2016-07-20 17:57:26,463 INFO [main] 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN, 
Service: , Ident: (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@52af26ee)
2016-07-20 17:57:26,510 INFO [main] 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred newApiCommitter.
2016-07-20 17:57:26,991 WARN [main] 
org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory: The short-circuit 
local reads feature cannot be used because libhadoop cannot be loaded.
2016-07-20 17:57:27,091 INFO [main] 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in config 
null
2016-07-20 17:57:27,154 INFO [main] 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output 
Committer Algorithm version is 1
2016-07-20 17:57:27,159 INFO [main] 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is 
org.apache.hadoop.tools.mapred.CopyCommitter
2016-07-20 17:57:27,231 INFO [main] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
org.apache.hadoop.mapreduce.jobhistory.EventType for class 
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
2016-07-20 17:57:27,232 INFO [main] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
2016-07-20 17:57:27,233 INFO [main] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
2016-07-20 17:57:27,234 INFO [main] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for class 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
2016-07-20 17:57:27,235 INFO [main] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class 
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
2016-07-20 17:57:27,241 INFO [main] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for class 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
2016-07-20 17:57:27,242 INFO [main] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for class 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
2016-07-20 17:57:27,243 INFO [main] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for 
class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
2016-07-20 17:57:27,292 INFO [main] 
org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system 
[hdfs://da74:8020]
2016-07-20 17:57:27,322 INFO [main] 
org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system 
[hdfs://da74:8020]
2016-07-20 17:57:27,352 INFO [main] 
org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system 
[hdfs://da74:8020]
2016-07-20 17:57:27,382 INFO [main] 
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Perms after 
creating 448, Expected: 448
2016-07-20 17:57:27,387 INFO [main] 
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Emitting job 
history data to the timeline server is not enabled
2016-07-20 17:57:27,428 INFO [main] 
org.apache.hadoop.yarn.event.AsyncDispatcher: 

[jira] [Created] (MAPREDUCE-6737) HS: job history recovery fails with NumericFormatException if the job wasn't initted properly

2016-07-20 Thread Roman Gavryliuk (JIRA)
Roman Gavryliuk created MAPREDUCE-6737:
--

 Summary: HS: job history recovery fails with 
NumericFormatException if the job wasn't initted properly
 Key: MAPREDUCE-6737
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6737
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: jobhistoryserver
Affects Versions: 2.5.1, 2.7.0
Reporter: Roman Gavryliuk


The problem shows itself while recovering old apps information:
2016-07-18 16:08:35,031 WARN
org.apache.hadoop.mapreduce.v2.jobhistory.FileNameIndexUtils: Unable to parse
start time from job history file
job_1468716177837_21790-1468845880296-username-applicationname-1468845889100-0-0-FAILED-root.queuename--1.jhist
: java.lang.NumberFormatException: For input string: 
""

The problem is in the JobHistoryEventHandler.java class, in the
following part of the code:

//initialize the launchTime in the JobIndexInfo of MetaInfo
  if(event.getHistoryEvent().getEventType() == EventType.JOB_INITED ){
JobInitedEvent jie = (JobInitedEvent) event.getHistoryEvent();
mi.getJobIndexInfo().setJobStartTime(jie.getLaunchTime());

Because the job was not initialized properly, the 'if' statement evaluates to
'false' and setJobStartTime() is not called.

In the JobIndexInfo constructor, we have a default value for jobStartTime:
this.jobStartTime = -1;

When the history server recovers any application's info, it passes all
parameters to an array of strings, jobDetails:
String[] jobDetails = fileName.split(DELIMITER);

Please note, DELIMITER is initialized in the following way:
static final String DELIMITER = "-";

So, the jobDetails array has 10 elements (job ID, submit time, username, job
name, finish time, number of maps, number of reducers, job status, queue, and
start time).
If jobStartTime = -1, the minus sign is treated as a delimiter and the code
will assign an empty string "" as the value of the 9th element (the start time)
in the jobDetails array.

In the org.apache.hadoop.mapreduce.v2.jobhistory.FileNameIndexUtils class, a
NumberFormatException is thrown while trying to parse the empty string as a
long:
Long.parseLong(decodeJobHistoryFileName(jobDetails[JOB_START_TIME_INDEX])));

The simplest fix is to change the value of this.jobStartTime to 0 in the
JobIndexInfo constructor.
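
To make the failure mode concrete, here is a small stand-alone illustration 
(hypothetical demo code, not HistoryServer code) of how a start time of -1 in 
the file name produces an empty token at the start-time index:
{code}
public class StartTimeSplitDemo {
  public static void main(String[] args) {
    // File name shape from the warning above, with start time = -1 at the end.
    String fileName = "job_1468716177837_21790-1468845880296-username"
        + "-applicationname-1468845889100-0-0-FAILED-root.queuename--1";
    String[] jobDetails = fileName.split("-");
    // The trailing "--1" yields an empty token at index 9 (the start time),
    // so Long.parseLong(jobDetails[9]) throws NumberFormatException.
    System.out.println("start time token = '" + jobDetails[9] + "'");
  }
}
{code}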



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: mapreduce-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-issues-h...@hadoop.apache.org



[jira] [Updated] (MAPREDUCE-6736) import hive table from parquet files, there is no 'job.splitmetainfo' file message

2016-07-20 Thread Ha, Hun Cheol (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ha, Hun Cheol updated MAPREDUCE-6736:
-
Description: 
This is the same issue as https://issues.apache.org/jira/browse/MAPREDUCE-3056,
which was created on 2011.09.21 and fixed on 2011.10.04.
There is a user (Sergey) who reported the same issue on 2015.05.13 too (see the
last comment of the above link).

On the beeline prompt, I tried to import a hive table from parquet files that
were exported from another hive table, but there was no "job.splitmetainfo"
file in the staging directory, so a FileNotFoundException occurred.

full log messages below

==

Log Type: syslog
Log Upload Time: Wed Jul 20 17:57:36 +0900 2016
Log Length: 21439
2016-07-20 17:57:26,139 INFO [main] 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for 
application appattempt_1468834620182_0036_01
2016-07-20 17:57:26,417 WARN [main] org.apache.hadoop.util.NativeCodeLoader: 
Unable to load native-hadoop library for your platform... using builtin-java 
classes where applicable
2016-07-20 17:57:26,463 INFO [main] 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
2016-07-20 17:57:26,463 INFO [main] 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN, 
Service: , Ident: (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@52af26ee)
2016-07-20 17:57:26,510 INFO [main] 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred newApiCommitter.
2016-07-20 17:57:26,991 WARN [main] 
org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory: The short-circuit 
local reads feature cannot be used because libhadoop cannot be loaded.
2016-07-20 17:57:27,091 INFO [main] 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in config 
null
2016-07-20 17:57:27,154 INFO [main] 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output 
Committer Algorithm version is 1
2016-07-20 17:57:27,159 INFO [main] 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is 
org.apache.hadoop.tools.mapred.CopyCommitter
2016-07-20 17:57:27,231 INFO [main] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
org.apache.hadoop.mapreduce.jobhistory.EventType for class 
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
2016-07-20 17:57:27,232 INFO [main] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
2016-07-20 17:57:27,233 INFO [main] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
2016-07-20 17:57:27,234 INFO [main] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for class 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
2016-07-20 17:57:27,235 INFO [main] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class 
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
2016-07-20 17:57:27,241 INFO [main] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for class 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
2016-07-20 17:57:27,242 INFO [main] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for class 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
2016-07-20 17:57:27,243 INFO [main] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for 
class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
2016-07-20 17:57:27,292 INFO [main] 
org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system 
[hdfs://da74:8020]
2016-07-20 17:57:27,322 INFO [main] 
org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system 
[hdfs://da74:8020]
2016-07-20 17:57:27,352 INFO [main] 
org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system 
[hdfs://da74:8020]
2016-07-20 17:57:27,382 INFO [main] 
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Perms after 
creating 448, Expected: 448
2016-07-20 17:57:27,387 INFO [main] 
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Emitting job 
history data to the timeline server is not enabled
2016-07-20 17:57:27,428 INFO [main] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for class 

[jira] [Updated] (MAPREDUCE-6736) import hive table from parquet files, there is no 'job.splitmetainfo' file message

2016-07-20 Thread Ha, Hun Cheol (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ha, Hun Cheol updated MAPREDUCE-6736:
-
Environment: 
Ubuntu 14.04.4 LTS (GNU/Linux 4.2.0-27-generic x86_64)
Hadoop 2.6.0-cdh5.7.0

  was:
Ubuntu 14.04.4 LTS (GNU/Linux 4.2.0-27-generic x86_64)

Hadoop 2.6.0-cdh5.7.0
Subversion http://github.com/cloudera/hadoop -r 
c00978c67b0d3fe9f3b896b5030741bd40bf541a
Compiled by jenkins on 2016-03-23T18:36Z
Compiled with protoc 2.5.0
From source with checksum b2eabfa328e763c88cb14168f9b372
This command was run using /usr/lib/hadoop/hadoop-common-2.6.0-cdh5.7.0.jar


> import hive table from parquet files, there is no 'job.splitmetainfo' file 
> message
> --
>
> Key: MAPREDUCE-6736
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6736
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: applicationmaster, mrv2
>Affects Versions: 2.6.0
> Environment: Ubuntu 14.04.4 LTS (GNU/Linux 4.2.0-27-generic x86_64)
> Hadoop 2.6.0-cdh5.7.0
>Reporter: Ha, Hun Cheol
>Priority: Blocker
>
> This is the same issue as https://issues.apache.org/jira/browse/MAPREDUCE-3056, 
> which was created on 2011.09.21 and fixed on 2011.10.04. There is a user 
> (Sergey) who reported the same issue on 2015.05.13 too (see the last comment 
> of the above link).
> On the beeline prompt, I tried to import a hive table from parquet files that 
> were exported from another hive table, but there is no 'job.splitmetainfo' 
> file in the staging directory: a FileNotFoundException occurs.
> full log messages below
> ==
> Log Type: syslog
> Log Upload Time: Wed Jul 20 17:57:36 +0900 2016
> Log Length: 21439
> 2016-07-20 17:57:26,139 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for 
> application appattempt_1468834620182_0036_01
> 2016-07-20 17:57:26,417 WARN [main] org.apache.hadoop.util.NativeCodeLoader: 
> Unable to load native-hadoop library for your platform... using builtin-java 
> classes where applicable
> 2016-07-20 17:57:26,463 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
> 2016-07-20 17:57:26,463 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN, 
> Service: , Ident: 
> (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@52af26ee)
> 2016-07-20 17:57:26,510 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred newApiCommitter.
> 2016-07-20 17:57:26,991 WARN [main] 
> org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory: The short-circuit 
> local reads feature cannot be used because libhadoop cannot be loaded.
> 2016-07-20 17:57:27,091 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in config 
> null
> 2016-07-20 17:57:27,154 INFO [main] 
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output 
> Committer Algorithm version is 1
> 2016-07-20 17:57:27,159 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is 
> org.apache.hadoop.tools.mapred.CopyCommitter
> 2016-07-20 17:57:27,231 INFO [main] 
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
> org.apache.hadoop.mapreduce.jobhistory.EventType for class 
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
> 2016-07-20 17:57:27,232 INFO [main] 
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
> org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
> 2016-07-20 17:57:27,233 INFO [main] 
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
> org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
> 2016-07-20 17:57:27,234 INFO [main] 
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
> org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for class 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
> 2016-07-20 17:57:27,235 INFO [main] 
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class 
> org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
> 2016-07-20 17:57:27,241 INFO [main] 
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
> org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for class 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
> 2016-07-20 17:57:27,242 INFO [main] 
> org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
> org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for class 
> 

[jira] [Created] (MAPREDUCE-6736) import hive table from parquet files, there is no 'job.splitmetainfo' file message

2016-07-20 Thread Ha, Hun Cheol (JIRA)
Ha, Hun Cheol created MAPREDUCE-6736:


 Summary: import hive table from parquet files, there is no 
'job.splitmetainfo' file message
 Key: MAPREDUCE-6736
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6736
 Project: Hadoop Map/Reduce
  Issue Type: Bug
  Components: applicationmaster, mrv2
Affects Versions: 2.6.0
 Environment: Ubuntu 14.04.4 LTS (GNU/Linux 4.2.0-27-generic x86_64)

Hadoop 2.6.0-cdh5.7.0
Subversion http://github.com/cloudera/hadoop -r 
c00978c67b0d3fe9f3b896b5030741bd40bf541a
Compiled by jenkins on 2016-03-23T18:36Z
Compiled with protoc 2.5.0
From source with checksum b2eabfa328e763c88cb14168f9b372
This command was run using /usr/lib/hadoop/hadoop-common-2.6.0-cdh5.7.0.jar
Reporter: Ha, Hun Cheol
Priority: Blocker


This is the same issue as https://issues.apache.org/jira/browse/MAPREDUCE-3056,
which was created on 2011.09.21 and fixed on 2011.10.04.
There is a user (Sergey) who reported the same issue on 2015.05.13 too (see the
last comment of the above link).

On the beeline prompt, I tried to import a hive table from parquet files that
were exported from another hive table, but there is no 'job.splitmetainfo' file
in the staging directory: a FileNotFoundException occurs.

full log messages below

==

Log Type: syslog
Log Upload Time: Wed Jul 20 17:57:36 +0900 2016
Log Length: 21439
2016-07-20 17:57:26,139 INFO [main] 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for 
application appattempt_1468834620182_0036_01
2016-07-20 17:57:26,417 WARN [main] org.apache.hadoop.util.NativeCodeLoader: 
Unable to load native-hadoop library for your platform... using builtin-java 
classes where applicable
2016-07-20 17:57:26,463 INFO [main] 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
2016-07-20 17:57:26,463 INFO [main] 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN, 
Service: , Ident: (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@52af26ee)
2016-07-20 17:57:26,510 INFO [main] 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred newApiCommitter.
2016-07-20 17:57:26,991 WARN [main] 
org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory: The short-circuit 
local reads feature cannot be used because libhadoop cannot be loaded.
2016-07-20 17:57:27,091 INFO [main] 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in config 
null
2016-07-20 17:57:27,154 INFO [main] 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output 
Committer Algorithm version is 1
2016-07-20 17:57:27,159 INFO [main] 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is 
org.apache.hadoop.tools.mapred.CopyCommitter
2016-07-20 17:57:27,231 INFO [main] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
org.apache.hadoop.mapreduce.jobhistory.EventType for class 
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
2016-07-20 17:57:27,232 INFO [main] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
2016-07-20 17:57:27,233 INFO [main] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
2016-07-20 17:57:27,234 INFO [main] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for class 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
2016-07-20 17:57:27,235 INFO [main] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class 
org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
2016-07-20 17:57:27,241 INFO [main] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for class 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
2016-07-20 17:57:27,242 INFO [main] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for class 
org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
2016-07-20 17:57:27,243 INFO [main] 
org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class 
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for 
class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
2016-07-20 17:57:27,292 INFO [main] 
org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system 
[hdfs://da74:8020]
2016-07-20 17:57:27,322 INFO [main] 
org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system