[jira] [Commented] (MAPREDUCE-5870) Support for passing Job priority through Application Submission Context in Mapreduce Side
[ https://issues.apache.org/jira/browse/MAPREDUCE-5870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984442#comment-14984442 ] Hadoop QA commented on MAPREDUCE-5870: --
(x) -1 overall
|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 8s | docker + precommit patch detected. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 7 new or modified test files. |
| +1 | mvninstall | 3m 6s | trunk passed |
| +1 | compile | 2m 24s | trunk passed with JDK v1.8.0_60 |
| +1 | compile | 2m 22s | trunk passed with JDK v1.7.0_79 |
| +1 | checkstyle | 0m 21s | trunk passed |
| +1 | mvneclipse | 1m 4s | trunk passed |
| +1 | findbugs | 3m 14s | trunk passed |
| +1 | javadoc | 1m 17s | trunk passed with JDK v1.8.0_60 |
| +1 | javadoc | 1m 23s | trunk passed with JDK v1.7.0_79 |
| +1 | mvninstall | 1m 37s | the patch passed |
| +1 | compile | 2m 14s | the patch passed with JDK v1.8.0_60 |
| +1 | javac | 2m 14s | the patch passed |
| +1 | compile | 2m 15s | the patch passed with JDK v1.7.0_79 |
| +1 | javac | 2m 15s | the patch passed |
| -1 | checkstyle | 0m 18s | Patch generated 10 new checkstyle issues in hadoop-mapreduce-project/hadoop-mapreduce-client (total was 380, now 385). |
| +1 | mvneclipse | 0m 53s | the patch passed |
| -1 | whitespace | 0m 0s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix. |
| +1 | findbugs | 3m 40s | the patch passed |
| +1 | javadoc | 1m 8s | the patch passed with JDK v1.8.0_60 |
| +1 | javadoc | 1m 20s | the patch passed with JDK v1.7.0_79 |
| +1 | unit | 8m 59s | hadoop-mapreduce-client-app in the patch passed with JDK v1.8.0_60. |
| +1 | unit | 0m 37s | hadoop-mapreduce-client-common in the patch passed with JDK v1.8.0_60. |
| -1 | unit | 1m 33s | hadoop-mapreduce-client-core in the patch failed with JDK v1.8.0_60. |
| -1 | unit | 105m 49s | hadoop-mapreduce-client-jobclient in the patch failed with JDK v1.8.0_60. |
| +1 | unit | 10m 8s | hadoop-mapreduce-client-app in the patch passed with JDK v1.7.0_79. |
| +1 | unit | 0m 51s | hadoop-mapreduce-client-common in the patch passed with JDK v1.7.0_79. |
| -1 | unit | 2m 0s | hadoop-mapreduce-client-core in the patch failed with JDK v1.7.0_79. |
| -1 | unit | 107m 24s | hadoop-mapreduce-clien
[jira] [Commented] (MAPREDUCE-5889) Deprecate FileInputFormat.setInputPaths(Job, String) and FileInputFormat.addInputPaths(Job, String)
[ https://issues.apache.org/jira/browse/MAPREDUCE-5889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984414#comment-14984414 ] Tsuyoshi Ozawa commented on MAPREDUCE-5889: --- Kicking Jenkins CI. > Deprecate FileInputFormat.setInputPaths(Job, String) and FileInputFormat.addInputPaths(Job, String) > --- > > Key: MAPREDUCE-5889 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5889 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Reporter: Akira AJISAKA > Assignee: Akira AJISAKA > Priority: Minor > Labels: BB2015-05-TBR, newbie > Attachments: MAPREDUCE-5889.3.patch, MAPREDUCE-5889.patch, MAPREDUCE-5889.patch > > > {{FileInputFormat.setInputPaths(Job job, String commaSeparatedPaths)}} and {{FileInputFormat.addInputPaths(Job job, String commaSeparatedPaths)}} fail to parse commaSeparatedPaths if a comma is included in the file path (e.g. path: {{/path/file,with,comma}}). > We should deprecate these methods and document using {{setInputPaths(Job job, Path... inputPaths)}} and {{addInputPaths(Job job, Path... inputPaths)}} instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
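The failure mode described in the issue can be illustrated without any Hadoop dependency: a setter that takes one comma-separated string has no way to tell path separators from commas inside a path. A minimal self-contained sketch; the {{splitPaths}} helper below is a hypothetical stand-in for the parsing that the String-based overloads perform, not the actual Hadoop code:

```java
// Illustrates why comma-separated path strings are ambiguous:
// a path that itself contains commas is split into bogus pieces.
// splitPaths is a hypothetical stand-in for the parsing done by
// FileInputFormat.setInputPaths(Job, String).
import java.util.Arrays;
import java.util.List;

public class CommaPathSplit {
    static List<String> splitPaths(String commaSeparatedPaths) {
        return Arrays.asList(commaSeparatedPaths.split(","));
    }

    public static void main(String[] args) {
        // One real file whose name contains commas...
        List<String> parsed = splitPaths("/path/file,with,comma");
        // ...is misread as three separate input paths.
        System.out.println(parsed.size()); // 3
        System.out.println(parsed);        // [/path/file, with, comma]
    }
}
```

This is why the {{Path...}} overloads are the safe alternative: each path arrives as a separate argument, so no separator parsing happens at all.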
[jira] [Commented] (MAPREDUCE-6513) MR job got hanged forever when one NM unstable for some time
[ https://issues.apache.org/jira/browse/MAPREDUCE-6513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14984385#comment-14984385 ] Varun Saxena commented on MAPREDUCE-6513: - [~vinodkv], attaching an initial patch. Kindly review. This patch primarily does the following:
# When an unusable node is reported, task attempt kill events are sent for the completed and running map tasks that ran on that node. A flag has been added to this event to indicate whether the next task attempt will be rescheduled (scheduled with a higher priority of 5). For an unusable node the attempt is marked to be rescheduled. If a task attempt is killed by the client, it will not be rescheduled with higher priority. I am not 100% convinced that a user-initiated kill should lead to a higher priority. Your thoughts on this?
# This rescheduled flag is then forwarded to TaskImpl in the attempt-killed event once killing of the attempt is complete.
# Based on this flag, the task will create a new attempt and send a TA_RESCHEDULE or TA_SCHEDULE event while processing the attempt-kill event. As it is a kill event, it is not counted towards failed attempts. If the attempt has to be rescheduled, TaskAttemptImpl will send a container request event to RMContainerAllocator. From here on, this will be treated like a failed map, hence its priority will be 5. As for failed maps, node or rack locality is not ensured; node locality cannot be ensured anyway until the node comes back up.
# Since on recovery we only consider SUCCESSFUL tasks, I think we need not record this flag in the history file.
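The flag-driven flow described above can be sketched as a small self-contained model. All names here ({{AttemptKillEvent}}, the priority constants) are hypothetical stand-ins mirroring the patch description, not the real Hadoop AM classes; the value 20 for an ordinary map's priority is an assumption:

```java
// Minimal model of the reschedule decision described in the comment:
// a kill event carries a flag saying whether the next attempt should be
// rescheduled at the elevated (fast-fail) priority.
public class RescheduleSketch {
    static final int FAST_FAIL_MAP_PRIORITY = 5; // priority for rescheduled/failed maps (from the comment)
    static final int DEFAULT_MAP_PRIORITY = 20;  // assumed ordinary map priority

    static class AttemptKillEvent {
        final boolean rescheduleAttempt;
        AttemptKillEvent(boolean rescheduleAttempt) {
            this.rescheduleAttempt = rescheduleAttempt;
        }
    }

    // Kills caused by an unusable node set the flag; user-initiated kills do not.
    static AttemptKillEvent onNodeUnusable() { return new AttemptKillEvent(true); }
    static AttemptKillEvent onUserKill()     { return new AttemptKillEvent(false); }

    // The task consults the flag when creating the next attempt.
    static int priorityForNextAttempt(AttemptKillEvent event) {
        return event.rescheduleAttempt ? FAST_FAIL_MAP_PRIORITY : DEFAULT_MAP_PRIORITY;
    }

    public static void main(String[] args) {
        System.out.println(priorityForNextAttempt(onNodeUnusable())); // 5
        System.out.println(priorityForNextAttempt(onUserKill()));     // 20
    }
}
```

Lower numbers mean higher priority here, matching the comment's point that a rescheduled map competes like a failed map rather than an ordinary one.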
> MR job got hanged forever when one NM unstable for some time > > > Key: MAPREDUCE-6513 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-6513 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: applicationmaster, resourcemanager > Affects Versions: 2.7.0 > Reporter: Bob > Assignee: Varun Saxena > Priority: Critical > Attachments: MAPREDUCE-6513.01.patch > > > While a job with many tasks was in progress, one node became unstable due to an OS issue. After the node became unstable, the map attempts on that node changed to the KILLED state. > Maps that were running on the unstable node are rescheduled; all of them sit in the SCHEDULED state waiting for the RM to assign containers. Ask requests for those maps were seen until the node became healthy again (all of those attempts failed); there are no ask requests after that. But the AM keeps preempting the reducers (it keeps recycling them). > In the end the reducers are waiting for the mappers to complete, and the mappers never get containers. > My question is: why were map requests not sent by the AM once the node recovered?
[jira] [Updated] (MAPREDUCE-6513) MR job got hanged forever when one NM unstable for some time
[ https://issues.apache.org/jira/browse/MAPREDUCE-6513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Saxena updated MAPREDUCE-6513: Attachment: MAPREDUCE-6513.01.patch
[jira] [Updated] (MAPREDUCE-6513) MR job got hanged forever when one NM unstable for some time
[ https://issues.apache.org/jira/browse/MAPREDUCE-6513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Saxena updated MAPREDUCE-6513: Status: Patch Available (was: Open)
[jira] [Updated] (MAPREDUCE-5870) Support for passing Job priority through Application Submission Context in Mapreduce Side
[ https://issues.apache.org/jira/browse/MAPREDUCE-5870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sunil G updated MAPREDUCE-5870: --- Attachment: 0008-MAPREDUCE-5870.patch Attaching an updated patch correcting the test failures. > Support for passing Job priority through Application Submission Context in Mapreduce Side > - > > Key: MAPREDUCE-5870 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-5870 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: client > Reporter: Sunil G > Assignee: Sunil G > Attachments: 0001-MAPREDUCE-5870.patch, 0002-MAPREDUCE-5870.patch, 0003-MAPREDUCE-5870.patch, 0004-MAPREDUCE-5870.patch, 0005-MAPREDUCE-5870.patch, 0006-MAPREDUCE-5870.patch, 0007-MAPREDUCE-5870.patch, 0008-MAPREDUCE-5870.patch, Yarn-2002.1.patch > > > Job priority can be set from the client side as below [configuration and API]: > a. JobConf.getJobPriority() and Job.setPriority(JobPriority priority) > b. We can also use the configuration "mapreduce.job.priority". > Now this job priority can be passed in the Application Submission Context from the client side. > Here we can reuse the MRJobConfig.PRIORITY configuration.
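The configuration route mentioned in option (b) can be shown concretely. A minimal sketch in the standard Hadoop configuration file format; the property name mapreduce.job.priority is taken from the description above, and HIGH is one of the JobPriority enum values (VERY_HIGH, HIGH, NORMAL, LOW, VERY_LOW):

```xml
<!-- Setting job priority via configuration (e.g. in a per-job conf passed at submission). -->
<property>
  <name>mapreduce.job.priority</name>
  <value>HIGH</value>
</property>
```

The programmatic route in option (a), Job.setPriority(JobPriority.HIGH), would set the same value on the job before submission.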
[jira] [Updated] (MAPREDUCE-6513) MR job got hanged forever when one NM unstable for some time
[ https://issues.apache.org/jira/browse/MAPREDUCE-6513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Saxena updated MAPREDUCE-6513: Attachment: (was: MAPREDUCE-6513.01.patch)
[jira] [Updated] (MAPREDUCE-6513) MR job got hanged forever when one NM unstable for some time
[ https://issues.apache.org/jira/browse/MAPREDUCE-6513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Saxena updated MAPREDUCE-6513: Attachment: MAPREDUCE-6513.01.patch