[jira] [Commented] (MAPREDUCE-6362) History Plugin should be updated

2016-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15425806#comment-15425806
 ] 

Hadoop QA commented on MAPREDUCE-6362:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 36s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 32s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 13s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
0s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 6s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 30s 
{color} | {color:red} root: The patch generated 8 new + 42 unchanged - 1 fixed 
= 50 total (was 43) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
20s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 31s 
{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common 
generated 1 new + 4579 unchanged - 0 fixed = 4580 total (was 4579) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 13s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-web-proxy 
generated 1 new + 25 unchanged - 0 fixed = 26 total (was 25) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 11s 
{color} | {color:red} 
hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-hs-plugins
 generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 32s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 25s 
{color} | {color:green} hadoop-yarn-server-web-proxy in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 17s 
{color} | {color:green} hadoop-mapreduce-client-hs-plugins in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 40s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12731975/MAPREDUCE-6362.patch |
| JIRA Issue | MAPREDUCE-6362 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux eb64649f153c 3.13.0-36-lowlatency 

[jira] [Commented] (MAPREDUCE-6541) Exclude scheduled reducer memory when calculating available mapper slots from headroom to avoid deadlock

2016-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15425755#comment-15425755
 ] 

Hadoop QA commented on MAPREDUCE-6541:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 6s {color} 
| {color:red} MAPREDUCE-6541 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12771973/MAPREDUCE-6541.01.patch
 |
| JIRA Issue | MAPREDUCE-6541 |
| Console output | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6676/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Exclude scheduled reducer memory when calculating available mapper slots from 
> headroom to avoid deadlock 
> -
>
> Key: MAPREDUCE-6541
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6541
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Wangda Tan
>Assignee: Varun Saxena
> Attachments: MAPREDUCE-6541.01.patch
>
>
> We saw an MR deadlock recently:
> - When NMs are restarted by the framework without recovery enabled, containers 
> running on those nodes are identified as "ABORTED", and the MR AM tries to 
> reschedule the "ABORTED" mapper containers.
> - Since such lost mappers are "ABORTED" containers, the MR AM gives the normal 
> mapper priority (priority=20) to these mapper requests. If any reducer request 
> (priority=10) is pending at the same time, the mapper requests must wait until 
> the reducer requests are satisfied.
> - In our test, one mapper needs 700+ MB, one reducer needs 1000+ MB, and the 
> RM's available resource equals the mapper request (700+ MB). Only one job was 
> running in the system, so the scheduler cannot allocate more reducer containers, 
> and the MR AM thinks there is enough headroom for the mapper, so reducer 
> containers are never preempted.
> MAPREDUCE-6302 solves most of the problem, but on the other hand I think we may 
> also need to exclude scheduled reducers' resources from the headroom when 
> calculating #available-mapper-slots, so that we can avoid excessive reducer 
> preemption.
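
To make the arithmetic concrete, here is a minimal sketch (not the real RMContainerAllocator code; the method, fields and numbers are illustrative only) of charging scheduled reducer memory against the RM-reported headroom before counting available mapper slots:

{code:title=HeadroomSketch.java}
// Illustrative only: names, fields and numbers below do not match the real
// RMContainerAllocator; this just shows the proposed arithmetic.
public final class HeadroomSketch {

  /**
   * How many mapper containers can still be started, once memory already
   * promised to scheduled reducers is counted against the RM-reported
   * headroom.
   */
  static int availableMapperSlots(long headroomMb,
                                  long scheduledReducerMb,
                                  long mapperRequestMb) {
    long usable = Math.max(0, headroomMb - scheduledReducerMb);
    return (int) (usable / mapperRequestMb);
  }

  public static void main(String[] args) {
    // Numbers from the description: headroom ~700 MB, reducer 1000 MB,
    // mapper 700 MB.
    System.out.println(availableMapperSlots(700, 0, 700));    // 1 -> AM waits, deadlock
    System.out.println(availableMapperSlots(700, 1000, 700)); // 0 -> AM ramps down reducers
  }
}
{code}

In the scenario above the count drops from one to zero once the scheduled reducer is charged against the headroom, which is exactly the signal the AM needs to ramp down reducers instead of waiting.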






[jira] [Updated] (MAPREDUCE-6711) JobImpl fails to handle preemption events on state COMMITTING

2016-08-17 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated MAPREDUCE-6711:
---
Target Version/s: 2.7.4  (was: 2.7.3)

2.7.3 is under release process, changing target-version to 2.7.4.

> JobImpl fails to handle preemption events on state COMMITTING
> -
>
> Key: MAPREDUCE-6711
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6711
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Prabhu Joseph
> Attachments: MAPREDUCE-6711.1.patch, MAPREDUCE-6711.patch
>
>
> When an MR app is preempted while in the COMMITTING state, we saw the 
> following exceptions in its log:
> {code}
> ERROR [AsyncDispatcher event handler] 
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Can't handle this event 
> at current state
> org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
> JOB_TASK_ATTEMPT_COMPLETED at COMMITTING
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:996)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:138)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:1289)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:1285)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:182)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:108)
> at java.lang.Thread.run(Thread.java:744)
> {code}
> and 
> {code}
> ERROR [AsyncDispatcher event handler] 
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Can't handle this event 
> at current state
> org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: 
> JOB_MAP_TASK_RESCHEDULED at COMMITTING
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:996)
> at 
> org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:138)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:1289)
> at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:1285)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:182)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:108)
> at java.lang.Thread.run(Thread.java:744)
> {code}
> It seems we need to handle these preemption-related events while the job is 
> being committed.
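
One reading of the stack traces above is that JobImpl's COMMITTING state simply has no transitions registered for these events. The following self-contained sketch (plain Java, deliberately not the YARN state-machine API, with illustrative names) shows the general idea of absorbing preemption-related events while committing instead of throwing:

{code:title=CommittingStateSketch.java}
import java.util.EnumSet;

// Conceptual sketch only: this is not JobImpl and not the YARN
// StateMachineFactory API. It just illustrates absorbing the
// preemption-related events while the job is committing instead of
// raising InvalidStateTransitonException.
public final class CommittingStateSketch {

  enum JobEventType {
    JOB_TASK_ATTEMPT_COMPLETED, JOB_MAP_TASK_RESCHEDULED, JOB_COMMIT_COMPLETED
  }

  // Events that can legitimately arrive after a preemption while the
  // committer is still running; they should be tolerated, not fatal.
  private static final EnumSet<JobEventType> IGNORABLE_WHILE_COMMITTING =
      EnumSet.of(JobEventType.JOB_TASK_ATTEMPT_COMPLETED,
                 JobEventType.JOB_MAP_TASK_RESCHEDULED);

  static void handleWhileCommitting(JobEventType event) {
    if (IGNORABLE_WHILE_COMMITTING.contains(event)) {
      // Stay in COMMITTING; record the completion/reschedule for later.
      System.out.println("Ignoring " + event + " while COMMITTING");
      return;
    }
    System.out.println("Handling " + event + " normally");
  }

  public static void main(String[] args) {
    handleWhileCommitting(JobEventType.JOB_MAP_TASK_RESCHEDULED);
    handleWhileCommitting(JobEventType.JOB_COMMIT_COMPLETED);
  }
}
{code}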






[jira] [Updated] (MAPREDUCE-6362) History Plugin should be updated

2016-08-17 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated MAPREDUCE-6362:
---
Target Version/s: 2.7.4  (was: 2.7.3)

2.7.3 is under release process, changing target-version to 2.7.4.

> History Plugin should be updated
> 
>
> Key: MAPREDUCE-6362
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6362
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 2.6.1
>Reporter: Mit Desai
>Assignee: Mit Desai
> Attachments: MAPREDUCE-6362.patch
>
>
> As applications complete, the RM tracks their IDs in a completed list. This 
> list is routinely truncated to limit the total number of applications 
> remembered by the RM.
> When a user clicks the History link for a job, the browser is redirected to 
> the application's tracking link obtained from the stored application 
> instance. But when the application has been purged from the RM, an error is 
> displayed instead.
> In very busy clusters the rate at which applications complete can cause 
> applications to be purged from the RM's internal list within hours, which 
> breaks the proxy URLs users have saved for their jobs.
> We would like the RM to provide tracking links that remain valid, so that 
> users are not frustrated by broken links.
> With the current plugin in place, redirection works for MapReduce jobs, but 
> we need to add the same functionality for Tez jobs.
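
For context, the kind of plugin involved resolves a purged application's history URL purely from its ID. The sketch below is a simplified stand-in for the real MapReduceTrackingUriPlugin; the JobHistoryServer address and the URL path are written out literally here and should be treated as illustrative:

{code:title=HistoryTrackingSketch.java}
import java.net.URI;
import java.net.URISyntaxException;

// Simplified stand-in for the history plugin: once the RM has forgotten an
// application, a JobHistoryServer URL can still be built purely from the
// application ID. This is not the real MapReduceTrackingUriPlugin; the JHS
// address (normally mapreduce.jobhistory.webapp.address) and the URL path
// are hard-coded here for illustration.
public final class HistoryTrackingSketch {

  static URI historyUriFor(String applicationId, String jhsWebAddress)
      throws URISyntaxException {
    // application_1471392000000_0042 -> job_1471392000000_0042
    String jobId = applicationId.replaceFirst("^application", "job");
    return new URI("http://" + jhsWebAddress + "/jobhistory/job/" + jobId);
  }

  public static void main(String[] args) throws URISyntaxException {
    System.out.println(historyUriFor("application_1471392000000_0042",
        "historyserver.example.com:19888"));
  }
}
{code}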






[jira] [Updated] (MAPREDUCE-6541) Exclude scheduled reducer memory when calculating available mapper slots from headroom to avoid deadlock

2016-08-17 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated MAPREDUCE-6541:
---
Target Version/s: 2.7.4  (was: 2.7.3)

2.7.3 is under release process, changing target-version to 2.7.4.

> Exclude scheduled reducer memory when calculating available mapper slots from 
> headroom to avoid deadlock 
> -
>
> Key: MAPREDUCE-6541
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6541
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Wangda Tan
>Assignee: Varun Saxena
> Attachments: MAPREDUCE-6541.01.patch
>
>
> We saw an MR deadlock recently:
> - When NMs are restarted by the framework without recovery enabled, containers 
> running on those nodes are identified as "ABORTED", and the MR AM tries to 
> reschedule the "ABORTED" mapper containers.
> - Since such lost mappers are "ABORTED" containers, the MR AM gives the normal 
> mapper priority (priority=20) to these mapper requests. If any reducer request 
> (priority=10) is pending at the same time, the mapper requests must wait until 
> the reducer requests are satisfied.
> - In our test, one mapper needs 700+ MB, one reducer needs 1000+ MB, and the 
> RM's available resource equals the mapper request (700+ MB). Only one job was 
> running in the system, so the scheduler cannot allocate more reducer containers, 
> and the MR AM thinks there is enough headroom for the mapper, so reducer 
> containers are never preempted.
> MAPREDUCE-6302 solves most of the problem, but on the other hand I think we may 
> also need to exclude scheduled reducers' resources from the headroom when 
> calculating #available-mapper-slots, so that we can avoid excessive reducer 
> preemption.






[jira] [Updated] (MAPREDUCE-6734) Add option to distcp to preserve file path structure of source files at the destination

2016-08-17 Thread Frederick Tucker (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Frederick Tucker updated MAPREDUCE-6734:

Hadoop Flags: Reviewed

> Add option to distcp to preserve file path structure of source files at the 
> destination
> ---
>
> Key: MAPREDUCE-6734
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6734
> Project: Hadoop Map/Reduce
>  Issue Type: Improvement
>  Components: distcp
>Affects Versions: 3.0.0-alpha2
> Environment: Software platform
>Reporter: Frederick Tucker
>Priority: Critical
>  Labels: distcp, newbie, patch
> Fix For: 3.0.0-alpha2
>
> Attachments: MAPREDUCE-6734.3.0.0-alpha2.patch, 
> MAPREDUCE-6734.3.0.0-alpha2.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> When copying files using distcp with globbed source files, all the files 
> matched by the glob are copied into a single flat directory.  This causes 
> problems when the directory structure at the source is important.  It is also 
> an issue when two files matched by the glob have the same name, because that 
> causes a duplicate-file error at the target.  I'd like an option to preserve 
> the directory structure of the source files when globbing inputs.






[jira] [Commented] (MAPREDUCE-6740) Enforce mapreduce.task.timeout to be at least mapreduce.task.progress-report.interval

2016-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15425448#comment-15425448
 ] 

Hadoop QA commented on MAPREDUCE-6740:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
32s {color} | {color:green} hadoop-mapreduce-project/hadoop-mapreduce-client: 
The patch generated 0 new + 723 unchanged - 3 fixed = 723 total (was 726) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 7s 
{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 44s 
{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 48s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12824239/mapreduce6740.006.patch
 |
| JIRA Issue | MAPREDUCE-6740 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f6c324e21573 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8693936 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6675/testReport/ |
| modules | C: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app U: 
hadoop-mapreduce-project/hadoop-mapreduce-client |
| Console output | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6675/console |
| Powered by 

[jira] [Updated] (MAPREDUCE-6740) Enforce mapreduce.task.timeout to be at least mapreduce.task.progress-report.interval

2016-08-17 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated MAPREDUCE-6740:
--
Attachment: mapreduce6740.006.patch

Attaching a new patch to address the unit test failure and javadoc warnings.

> Enforce mapreduce.task.timeout to be at least 
> mapreduce.task.progress-report.interval
> -
>
> Key: MAPREDUCE-6740
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6740
> Project: Hadoop Map/Reduce
>  Issue Type: Bug
>  Components: mr-am
>Affects Versions: 2.8.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Minor
> Attachments: mapreduce6740.001.patch, mapreduce6740.002.patch, 
> mapreduce6740.003.patch, mapreduce6740.004.patch, mapreduce6740.005.patch, 
> mapreduce6740.006.patch
>
>
> MAPREDUCE-6242 made the task status update interval configurable to ease the 
> pressure on the MR AM when processing status updates, but it did not ensure 
> that mapreduce.task.timeout is at least the configured value of the task 
> progress-report interval.
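
A minimal sketch of the invariant being asked for, using the standard Configuration API; the default values and the clamp-versus-warn policy are placeholders, not the shipped behaviour:

{code:title=TaskTimeoutSketch.java}
import org.apache.hadoop.conf.Configuration;

// Sketch of the invariant only: the effective task timeout should never be
// smaller than the progress-report interval, otherwise a healthy task could
// be killed between two reports. Defaults below are placeholders, and
// whether to clamp or merely warn is left to the real patch.
public final class TaskTimeoutSketch {

  static long effectiveTaskTimeout(Configuration conf) {
    long timeout = conf.getLong("mapreduce.task.timeout", 600_000L);
    long reportInterval =
        conf.getLong("mapreduce.task.progress-report.interval", 3_000L);
    // Clamp the timeout up to the report interval if it was misconfigured.
    return Math.max(timeout, reportInterval);
  }
}
{code}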






[jira] [Commented] (MAPREDUCE-6690) Limit the number of resources a single map reduce job can submit for localization

2016-08-17 Thread Chris Trezzo (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15425190#comment-15425190
 ] 

Chris Trezzo commented on MAPREDUCE-6690:
-

Thanks [~jlowe] for the review and commit!

> Limit the number of resources a single map reduce job can submit for 
> localization
> -
>
> Key: MAPREDUCE-6690
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6690
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Fix For: 2.9.0
>
> Attachments: MAPREDUCE-6690-trunk-v1.patch, 
> MAPREDUCE-6690-trunk-v2.patch, MAPREDUCE-6690-trunk-v3.patch, 
> MAPREDUCE-6690-trunk-v4.patch, MAPREDUCE-6690-trunk-v5.patch, 
> MAPREDUCE-6690-trunk-v6.patch, MAPREDUCE-6690-trunk-v7.patch
>
>
> Users will sometimes submit a large amount of resources to be localized as 
> part of a single map reduce job. This can cause issues with YARN localization 
> that destabilize the cluster and potentially impact other user jobs. These 
> resources are specified via the files, libjars, archives and jobjar command 
> line arguments or directly through the configuration (i.e. distributed cache 
> api). The resources specified could be too large in multiple dimensions:
> # Total size
> # Number of files
> # Size of an individual resource (i.e. a large fat jar)
> We would like to encourage good behavior on the client side by having the 
> option of enforcing resource limits along the above dimensions.
> There should be a separate effort to enforce limits at the YARN layer on the 
> server side, but this jira is only covering the map reduce layer on the 
> client side. In practice, having these client side limits will get us a long 
> way towards preventing these localization anti-patterns.
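
A hedged client-side sketch of the three limits listed above (file count, total size, single-resource size); the class and field names are illustrative, and the real checks belong in JobResourceUploader driven by configuration:

{code:title=LocalResourceLimitsSketch.java}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;

// Client-side sketch of the three limits described above (file count,
// total size, single-resource size). Names are illustrative; the real
// checks live in JobResourceUploader and are driven by configuration.
public final class LocalResourceLimitsSketch {

  private final int maxResources;
  private final long maxTotalBytes;
  private final long maxSingleResourceBytes;

  LocalResourceLimitsSketch(int maxResources, long maxTotalBytes,
                            long maxSingleResourceBytes) {
    this.maxResources = maxResources;
    this.maxTotalBytes = maxTotalBytes;
    this.maxSingleResourceBytes = maxSingleResourceBytes;
  }

  /** Fails job submission early, before anything is localized. */
  void check(FileStatus[] resources) throws IOException {
    if (resources.length > maxResources) {
      throw new IOException("Too many resources to localize: "
          + resources.length + " > " + maxResources);
    }
    long total = 0;
    for (FileStatus s : resources) {
      if (s.getLen() > maxSingleResourceBytes) {
        throw new IOException("Resource too large: " + s.getPath());
      }
      total += s.getLen();
    }
    if (total > maxTotalBytes) {
      throw new IOException("Combined resource size " + total
          + " exceeds limit " + maxTotalBytes);
    }
  }
}
{code}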






[jira] [Created] (MAPREDUCE-6759) JobSubmitter/JobResourceUploader should parallelize upload of -libjars, -files, -archives

2016-08-17 Thread Dennis Huo (JIRA)
Dennis Huo created MAPREDUCE-6759:
-

 Summary: JobSubmitter/JobResourceUploader should parallelize 
upload of -libjars, -files, -archives
 Key: MAPREDUCE-6759
 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6759
 Project: Hadoop Map/Reduce
  Issue Type: Improvement
  Components: job submission
Reporter: Dennis Huo


During job submission, {{JobResourceUploader}} currently iterates over the 
{{-libjars}}, {{-files}}, and {{-archives}} entries in sequential for-loops, which 
can significantly slow down job startup time when a large number of files need 
to be uploaded, especially when staging the files to a cloud object-store-based 
FileSystem implementation such as S3, GCS, or WASB, where round-trip latencies 
may be higher than HDFS's despite good throughput when parallelized:

{code:title=JobResourceUploader.java}
if (files != null) {
  FileSystem.mkdirs(jtFs, filesDir, mapredSysPerms);
  String[] fileArr = files.split(",");
  for (String tmpFile : fileArr) {
URI tmpURI = null;
try {
  tmpURI = new URI(tmpFile);
} catch (URISyntaxException e) {
  throw new IllegalArgumentException(e);
}
Path tmp = new Path(tmpURI);
Path newPath = copyRemoteFiles(filesDir, tmp, conf, replication);
try {
  URI pathURI = getPathURI(newPath, tmpURI.getFragment());
  DistributedCache.addCacheFile(pathURI, conf);
} catch (URISyntaxException ue) {
  // should not throw a uri exception
  throw new IOException("Failed to create uri for " + tmpFile, ue);
}
  }
}

if (libjars != null) {
  FileSystem.mkdirs(jtFs, libjarsDir, mapredSysPerms);
  String[] libjarsArr = libjars.split(",");
  for (String tmpjars : libjarsArr) {
Path tmp = new Path(tmpjars);
Path newPath = copyRemoteFiles(libjarsDir, tmp, conf, replication);
DistributedCache.addFileToClassPath(
new Path(newPath.toUri().getPath()), conf, jtFs);
  }
}

if (archives != null) {
  FileSystem.mkdirs(jtFs, archivesDir, mapredSysPerms);
  String[] archivesArr = archives.split(",");
  for (String tmpArchives : archivesArr) {
URI tmpURI;
try {
  tmpURI = new URI(tmpArchives);
} catch (URISyntaxException e) {
  throw new IllegalArgumentException(e);
}
Path tmp = new Path(tmpURI);
Path newPath = copyRemoteFiles(archivesDir, tmp, conf, replication);
try {
  URI pathURI = getPathURI(newPath, tmpURI.getFragment());
  DistributedCache.addCacheArchive(pathURI, conf);
} catch (URISyntaxException ue) {
  // should not throw a URI exception
  throw new IOException("Failed to create uri for " + tmpArchives, ue);
}
  }
}
{code}

Parallelizing the upload of these files would improve job submission time.
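
One possible shape for that change, sketched under the assumption that a bounded thread pool wrapping the existing copy logic (represented here by a hypothetical copyToStaging() hook) is enough; this is not the actual patch:

{code:title=ParallelUploadSketch.java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import org.apache.hadoop.fs.Path;

// Not the actual patch: a rough sketch of submitting each copy to a small
// thread pool instead of copying the comma-separated entries one by one.
// copyToStaging() is a hypothetical stand-in for the existing
// copyRemoteFiles()/DistributedCache bookkeeping.
public final class ParallelUploadSketch {

  interface Uploader {
    Path copyToStaging(String localEntry) throws IOException;
  }

  static List<Path> uploadAll(String commaSeparated, Uploader uploader,
                              int threads)
      throws IOException, InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    try {
      List<Future<Path>> futures = new ArrayList<>();
      for (String entry : commaSeparated.split(",")) {
        Callable<Path> task = () -> uploader.copyToStaging(entry);
        futures.add(pool.submit(task));
      }
      List<Path> staged = new ArrayList<>();
      for (Future<Path> f : futures) {
        try {
          staged.add(f.get()); // surfaces the first upload failure
        } catch (ExecutionException e) {
          throw new IOException("Upload failed", e.getCause());
        }
      }
      return staged;
    } finally {
      pool.shutdown();
    }
  }
}
{code}

A small, bounded pool caps the number of concurrent connections to the staging FileSystem while still hiding most of the per-file round-trip latency.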






[jira] [Commented] (MAPREDUCE-6690) Limit the number of resources a single map reduce job can submit for localization

2016-08-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15424864#comment-15424864
 ] 

Hudson commented on MAPREDUCE-6690:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10291 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10291/])
MAPREDUCE-6690. Limit the number of resources a single map reduce job (jlowe: 
rev f80a7298325a4626638ee24467e2012442e480d4)
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/filecache/ClientDistributedCacheManager.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/v2/TestMRJobs.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobResourceUploader.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
* (add) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/TestJobResourceUploader.java


> Limit the number of resources a single map reduce job can submit for 
> localization
> -
>
> Key: MAPREDUCE-6690
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6690
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Fix For: 2.9.0
>
> Attachments: MAPREDUCE-6690-trunk-v1.patch, 
> MAPREDUCE-6690-trunk-v2.patch, MAPREDUCE-6690-trunk-v3.patch, 
> MAPREDUCE-6690-trunk-v4.patch, MAPREDUCE-6690-trunk-v5.patch, 
> MAPREDUCE-6690-trunk-v6.patch, MAPREDUCE-6690-trunk-v7.patch
>
>
> Users will sometimes submit a large amount of resources to be localized as 
> part of a single map reduce job. This can cause issues with YARN localization 
> that destabilize the cluster and potentially impact other user jobs. These 
> resources are specified via the files, libjars, archives and jobjar command 
> line arguments or directly through the configuration (i.e. distributed cache 
> api). The resources specified could be too large in multiple dimensions:
> # Total size
> # Number of files
> # Size of an individual resource (i.e. a large fat jar)
> We would like to encourage good behavior on the client side by having the 
> option of enforcing resource limits along the above dimensions.
> There should be a separate effort to enforce limits at the YARN layer on the 
> server side, but this jira is only covering the map reduce layer on the 
> client side. In practice, having these client side limits will get us a long 
> way towards preventing these localization anti-patterns.






[jira] [Updated] (MAPREDUCE-6690) Limit the number of resources a single map reduce job can submit for localization

2016-08-17 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAPREDUCE-6690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated MAPREDUCE-6690:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

Thanks, Chris!  I committed this to trunk and branch-2.

> Limit the number of resources a single map reduce job can submit for 
> localization
> -
>
> Key: MAPREDUCE-6690
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6690
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Fix For: 2.9.0
>
> Attachments: MAPREDUCE-6690-trunk-v1.patch, 
> MAPREDUCE-6690-trunk-v2.patch, MAPREDUCE-6690-trunk-v3.patch, 
> MAPREDUCE-6690-trunk-v4.patch, MAPREDUCE-6690-trunk-v5.patch, 
> MAPREDUCE-6690-trunk-v6.patch, MAPREDUCE-6690-trunk-v7.patch
>
>
> Users will sometimes submit a large amount of resources to be localized as 
> part of a single map reduce job. This can cause issues with YARN localization 
> that destabilize the cluster and potentially impact other user jobs. These 
> resources are specified via the files, libjars, archives and jobjar command 
> line arguments or directly through the configuration (i.e. distributed cache 
> api). The resources specified could be too large in multiple dimensions:
> # Total size
> # Number of files
> # Size of an individual resource (i.e. a large fat jar)
> We would like to encourage good behavior on the client side by having the 
> option of enforcing resource limits along the above dimensions.
> There should be a separate effort to enforce limits at the YARN layer on the 
> server side, but this jira is only covering the map reduce layer on the 
> client side. In practice, having these client side limits will get us a long 
> way towards preventing these localization anti-patterns.






[jira] [Commented] (MAPREDUCE-6690) Limit the number of resources a single map reduce job can submit for localization

2016-08-17 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15424828#comment-15424828
 ] 

Jason Lowe commented on MAPREDUCE-6690:
---

+1 lgtm.  Committing this.

> Limit the number of resources a single map reduce job can submit for 
> localization
> -
>
> Key: MAPREDUCE-6690
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6690
> Project: Hadoop Map/Reduce
>  Issue Type: New Feature
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Attachments: MAPREDUCE-6690-trunk-v1.patch, 
> MAPREDUCE-6690-trunk-v2.patch, MAPREDUCE-6690-trunk-v3.patch, 
> MAPREDUCE-6690-trunk-v4.patch, MAPREDUCE-6690-trunk-v5.patch, 
> MAPREDUCE-6690-trunk-v6.patch, MAPREDUCE-6690-trunk-v7.patch
>
>
> Users will sometimes submit a large amount of resources to be localized as 
> part of a single map reduce job. This can cause issues with YARN localization 
> that destabilize the cluster and potentially impact other user jobs. These 
> resources are specified via the files, libjars, archives and jobjar command 
> line arguments or directly through the configuration (i.e. distributed cache 
> api). The resources specified could be too large in multiple dimensions:
> # Total size
> # Number of files
> # Size of an individual resource (i.e. a large fat jar)
> We would like to encourage good behavior on the client side by having the 
> option of enforcing resource limits along the above dimensions.
> There should be a separate effort to enforce limits at the YARN layer on the 
> server side, but this jira is only covering the map reduce layer on the 
> client side. In practice, having these client side limits will get us a long 
> way towards preventing these localization anti-patterns.






[jira] [Commented] (MAPREDUCE-6740) Enforce mapreduce.task.timeout to be at least mapreduce.task.progress-report.interval

2016-08-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15423980#comment-15423980
 ] 

Hadoop QA commented on MAPREDUCE-6740:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 36s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 4s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 45s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
33s {color} | {color:green} hadoop-mapreduce-project/hadoop-mapreduce-client: 
The patch generated 0 new + 723 unchanged - 3 fixed = 723 total (was 726) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
36s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 18s 
{color} | {color:red} 
hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core 
generated 2 new + 2508 unchanged - 0 fixed = 2510 total (was 2508) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 13s {color} 
| {color:red} hadoop-mapreduce-client-core in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 47s 
{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 23s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.mapred.TestTaskProgressReporter |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12824079/mapreduce6740.005.patch
 |
| JIRA Issue | MAPREDUCE-6740 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 78e144d1b676 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2353271 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| javadoc | 
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/6674/artifact/patchprocess/diff-javadoc-javadoc-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt
 |
| unit |