[jira] [Resolved] (MAPREDUCE-7352) ArithmeticException in some MapReduce tests
[ https://issues.apache.org/jira/browse/MAPREDUCE-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Bacsko resolved MAPREDUCE-7352.
-------------------------------------
    Resolution: Duplicate

> ArithmeticException in some MapReduce tests
> -------------------------------------------
>
>                 Key: MAPREDUCE-7352
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7352
>             Project: Hadoop Map/Reduce
>          Issue Type: Task
>          Components: test
>            Reporter: Peter Bacsko
>            Assignee: Peter Bacsko
>            Priority: Major
>
> There are some ArithmeticException failures in certain MapReduce test cases, for example:
> {noformat}
> 2021-06-14 14:14:20,078 INFO [main] service.AbstractService (AbstractService.java:noteFailure(267)) - Service org.apache.hadoop.mapreduce.v2.app.MRAppMaster failed in state STARTED
> java.lang.ArithmeticException: / by zero
>     at org.apache.hadoop.yarn.event.AsyncDispatcher$GenericEventHandler.handle(AsyncDispatcher.java:304)
>     at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:1015)
>     at org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.handle(JobImpl.java:141)
>     at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher.handle(MRAppMaster.java:1544)
>     at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1263)
>     at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
>     at org.apache.hadoop.mapreduce.v2.app.MRApp.submit(MRApp.java:301)
>     at org.apache.hadoop.mapreduce.v2.app.MRApp.submit(MRApp.java:285)
>     at org.apache.hadoop.mapreduce.v2.app.TestMRApp.testUpdatedNodes(TestMRApp.java:223)
> {noformat}
> We have to set {{detailsInterval}} when the async dispatcher is spied. For some reason, despite the fact that {{serviceInit()}} is called, this variable remains zero.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: mapreduce-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-issues-h...@hadoop.apache.org
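The stack trace quoted above points at a periodic-diagnostics guard in the event handler. As a minimal, hypothetical illustration (the class and method names below are invented for this sketch, not the actual Hadoop code), this is how a gate of the form {{queueSize % detailsInterval}} throws {{ArithmeticException: / by zero}} when the interval field never gets initialized:

```java
// Hypothetical sketch of the failure mode: a handler that prints queue
// diagnostics every `detailsInterval` events. If init() never runs, or
// its effect is lost because the object is spied/mocked, the field keeps
// its default value 0 and the modulo below throws
// java.lang.ArithmeticException: / by zero.
class IntervalGate {
    private int detailsInterval; // defaults to 0

    void init(int interval) {
        this.detailsInterval = interval;
    }

    // Decides whether queue details should be printed for this event.
    boolean shouldPrintDetails(int queueSize) {
        return queueSize % detailsInterval == 0;
    }
}
```

This matches the report's conclusion: the fix is to make sure {{detailsInterval}} is set even when the async dispatcher is spied, so the modulo never runs with a zero divisor.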
[jira] [Commented] (MAPREDUCE-7352) ArithmeticException in some MapReduce tests
[ https://issues.apache.org/jira/browse/MAPREDUCE-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17362939#comment-17362939 ]

Peter Bacsko commented on MAPREDUCE-7352:
-----------------------------------------

OK, just seen that this was fixed a long time ago. I was on the wrong branch.
[jira] [Created] (MAPREDUCE-7352) ArithmeticException in some MapReduce tests
Peter Bacsko created MAPREDUCE-7352:
---------------------------------------

             Summary: ArithmeticException in some MapReduce tests
                 Key: MAPREDUCE-7352
                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7352
             Project: Hadoop Map/Reduce
          Issue Type: Task
          Components: test
            Reporter: Peter Bacsko
            Assignee: Peter Bacsko
[jira] [Assigned] (MAPREDUCE-7333) SecureShuffleUtils.toHex(byte[]) creates malformed hex string
[ https://issues.apache.org/jira/browse/MAPREDUCE-7333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Bacsko reassigned MAPREDUCE-7333:
---------------------------------------

    Assignee: Peter Bacsko

> SecureShuffleUtils.toHex(byte[]) creates malformed hex string
> -------------------------------------------------------------
>
>                 Key: MAPREDUCE-7333
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7333
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>    Affects Versions: 3.2.2
>            Reporter: Marcono1234
>            Assignee: Peter Bacsko
>            Priority: Major
>
> {{org.apache.hadoop.mapreduce.security.SecureShuffleUtils.toHex(byte[])}} creates malformed hex strings:
> {code}
> for (byte b : ba) {
>   ps.printf("%x", b);
> }
> {code}
> The pattern {{"%x"}} emits only one hex char for byte values below 16, so for example both {{1, 0}} and {{16}} produce the result {{"10"}}.
> A correct (and more efficient) implementation would be:
> {code}
> public static String toHex(byte[] ba) {
>   StringBuilder sb = new StringBuilder(ba.length * 2);
>   for (byte b : ba) {
>     int unsignedB = b & 0xFF;
>     if (unsignedB < 16) {
>       sb.append('0');
>     }
>     sb.append(Integer.toHexString(unsignedB));
>   }
>   return sb.toString();
> }
> {code}
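The collision described in the report is easy to demonstrate. The sketch below is a standalone demo (it is not the actual {{SecureShuffleUtils}} class): it places the buggy {{printf("%x", b)}} loop next to the corrected implementation from the issue description, making the ambiguity visible: the inputs {1, 0} and {16} both encode to "10" with the old code, while the fixed version pads every byte to exactly two digits.

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

// Standalone demo of the bug (not the actual SecureShuffleUtils class).
class HexDemo {
    // Buggy variant: "%x" emits a single digit for byte values below 16,
    // so different byte arrays can collide on the same output string.
    static String buggyToHex(byte[] ba) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        PrintStream ps = new PrintStream(out);
        for (byte b : ba) {
            ps.printf("%x", b);
        }
        ps.flush();
        return out.toString();
    }

    // Fixed variant from the issue description: two digits per byte.
    static String fixedToHex(byte[] ba) {
        StringBuilder sb = new StringBuilder(ba.length * 2);
        for (byte b : ba) {
            int unsignedB = b & 0xFF;
            if (unsignedB < 16) {
                sb.append('0'); // pad single-digit values
            }
            sb.append(Integer.toHexString(unsignedB));
        }
        return sb.toString();
    }
}
```

With the buggy version, {{buggyToHex(new byte[]{1, 0})}} and {{buggyToHex(new byte[]{16})}} both return "10"; the fixed version returns "0100" and "10" respectively, so the encoding is unambiguous and decodable.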
[jira] [Commented] (MAPREDUCE-7333) SecureShuffleUtils.toHex(byte[]) creates malformed hex string
[ https://issues.apache.org/jira/browse/MAPREDUCE-7333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17313412#comment-17313412 ]

Peter Bacsko commented on MAPREDUCE-7333:
-----------------------------------------

Yeah, this was just a quick POC that I copied from Eclipse. Both ideas make sense to me.

cc [~aajisaka] [~ahussein]
[jira] [Commented] (MAPREDUCE-7333) SecureShuffleUtils.toHex(byte[]) creates malformed hex string
[ https://issues.apache.org/jira/browse/MAPREDUCE-7333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17313260#comment-17313260 ]

Peter Bacsko commented on MAPREDUCE-7333:
-----------------------------------------

[~Marcono1234] this is certainly an interesting observation. What about this implementation? Maybe a bit too contrived, but it only uses array lookups:

{noformat}
public static String toHex(byte[] ba) {
  String[] hexChars = new String[] {
      "0", "1", "2", "3", "4", "5", "6", "7",
      "8", "9", "a", "b", "c", "d", "e", "f",
  };

  StringBuilder sb = new StringBuilder(ba.length * 2);
  for (byte b : ba) {
    int high = (b & 0xf0) >> 4;
    int low = b & 0x0f;
    sb.append(hexChars[high]);
    sb.append(hexChars[low]);
  }

  return sb.toString();
}
{noformat}
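The table-lookup variant in the comment above relies on nibble arithmetic: {{(b & 0xf0) >> 4}} isolates the high four bits and {{b & 0x0f}} the low four, and each nibble indexes a 16-entry table, so no padding logic is needed. A compact standalone rendering of the same idea (using a {{char[]}} table instead of {{String[]}}; the output is identical):

```java
// Compact rendering of the lookup-table approach from the comment above,
// using a char[] table instead of String[] (same output).
class NibbleHex {
    private static final char[] HEX = "0123456789abcdef".toCharArray();

    static String toHex(byte[] ba) {
        StringBuilder sb = new StringBuilder(ba.length * 2);
        for (byte b : ba) {
            sb.append(HEX[(b & 0xf0) >> 4]); // high nibble
            sb.append(HEX[b & 0x0f]);        // low nibble
        }
        return sb.toString();
    }
}
```

Because each byte always contributes exactly two characters, this variant avoids the padding bug by construction.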
[jira] [Updated] (MAPREDUCE-7309) Improve performance of reading resource request for mapper/reducers from config
[ https://issues.apache.org/jira/browse/MAPREDUCE-7309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Bacsko updated MAPREDUCE-7309:
------------------------------------
    Description: 
This is an issue that could affect all releases which include YARN-6927. Basically, we use a regex match repeatedly when we read the mapper/reducer resource requests from config files. When we have a large config file and a large number of splits, it can take a long time. We saw the AM take hours to parse the config when we had 200k+ splits with a large config file (hundreds of KBs).

The problematic part is this:
{noformat}
private void populateResourceCapability(TaskType taskType) {
  String resourceTypePrefix = getResourceTypePrefix(taskType);
  boolean memorySet = false;
  boolean cpuVcoresSet = false;

  if (resourceTypePrefix != null) {
    List resourceRequests =
        ResourceUtils.getRequestedResourcesFromConfig(conf, resourceTypePrefix);
{noformat}
Inside {{ResourceUtils.getRequestedResourcesFromConfig()}}, we call {{Configuration.getValByRegex()}}, which goes through all property keys that come from the MapReduce job configuration (jobconf.xml). If the job config is large (e.g. due to being part of an MR pipeline where it was populated by an earlier job), this results in running a regexp match unnecessarily for all properties over and over again. This is not necessary, because all mappers and reducers will have the same config, respectively. We should do proper caching for pre-configured resource requests.

  was:
This is an issue could affect all the releases which includes YARN-6927. Basically, we use regex match repeatedly when we read mapper/reducer resource request from config files. When we have large config file, and large number of splits, it could take a long time. We saw AM could take hours to parse config when we have 200k+ splits, with a large config file (hundreds of kbs).

The problamtic part is this:
{noformat}
private void populateResourceCapability(TaskType taskType) {
  String resourceTypePrefix = getResourceTypePrefix(taskType);
  boolean memorySet = false;
  boolean cpuVcoresSet = false;

  if (resourceTypePrefix != null) {
    List resourceRequests =
        ResourceUtils.getRequestedResourcesFromConfig(conf, resourceTypePrefix);
{noformat}
Inside {{ResourceUtils.getRequestedResourcesFromConfig()}}, we call {{Configuration.getValByRegex()}} which goes through all property keys that come from the MapReduce job configuration (jobconf.xml). If the job config is large (eg. due to being part of an MR pipeline and it was populated by an earlier job), then this results in running a regexp match unnecessarily for all properties over and over again. This is not necessary, because all mappers and reducers will have the same config, respectively. We should do proper caching for pre-configured resource requests.
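The caching proposed in the description can be sketched as follows. This is a hypothetical illustration (the class name {{ResourceRequestCache}} and its shape are invented here; it is not the committed patch): since every mapper shares one configuration and every reducer shares one configuration, the expensive full-config regex scan only needs to run once per resource-type prefix, with later tasks served from a cache.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical sketch of the caching idea (not the committed patch):
// memoize the result of the expensive full-config regex scan per
// resource-type prefix, so it runs once instead of once per task.
class ResourceRequestCache {
    private final Map<String, List<String>> cache = new ConcurrentHashMap<>();

    // expensiveLookup stands in for
    // ResourceUtils.getRequestedResourcesFromConfig(conf, prefix),
    // which internally calls Configuration.getValByRegex().
    List<String> get(String resourceTypePrefix,
                     Function<String, List<String>> expensiveLookup) {
        // computeIfAbsent runs the scan at most once per prefix;
        // subsequent callers get the cached list.
        return cache.computeIfAbsent(resourceTypePrefix, expensiveLookup);
    }
}
```

With 200k+ splits, this turns 200k+ regex scans of the whole jobconf into one scan per task type.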
[jira] [Updated] (MAPREDUCE-7309) Improve performance of reading resource request for mapper/reducers from config
[ https://issues.apache.org/jira/browse/MAPREDUCE-7309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Bacsko updated MAPREDUCE-7309:
------------------------------------
    Hadoop Flags: Reviewed
      Resolution: Fixed
          Status: Resolved  (was: Patch Available)

> Improve performance of reading resource request for mapper/reducers from config
> -------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-7309
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7309
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: applicationmaster
>    Affects Versions: 3.0.0, 3.1.0, 3.2.0, 3.3.0
>            Reporter: Wangda Tan
>            Assignee: Peter Bacsko
>            Priority: Major
>             Fix For: 3.2.2, 3.4.0, 3.1.5, 3.3.1
>
>         Attachments: MAPREDUCE-7309-003.patch, MAPREDUCE-7309-004.patch, MAPREDUCE-7309-005.patch, MAPREDUCE-7309-branch-3.1-001.patch, MAPREDUCE-7309-branch-3.2-001.patch, MAPREDUCE-7309-branch-3.3-001.patch, MAPREDUCE-7309.001.patch, MAPREDUCE-7309.002.patch
[jira] [Commented] (MAPREDUCE-7309) Improve performance of reading resource request for mapper/reducers from config
[ https://issues.apache.org/jira/browse/MAPREDUCE-7309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17238670#comment-17238670 ]

Peter Bacsko commented on MAPREDUCE-7309:
-----------------------------------------

Thanks for the review [~snemeth], I committed this to the remaining branches.

> Improve performance of reading resource request for mapper/reducers from config
> -------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-7309
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7309
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: applicationmaster
>    Affects Versions: 3.0.0, 3.1.0, 3.2.0, 3.3.0
>            Reporter: Wangda Tan
>            Assignee: Peter Bacsko
>            Priority: Major
>             Fix For: 3.2.2, 3.4.0, 3.1.5, 3.3.1
>
>         Attachments: MAPREDUCE-7309-003.patch, MAPREDUCE-7309-004.patch, MAPREDUCE-7309-005.patch, MAPREDUCE-7309-branch-3.1-001.patch, MAPREDUCE-7309-branch-3.2-001.patch, MAPREDUCE-7309-branch-3.3-001.patch, MAPREDUCE-7309.001.patch, MAPREDUCE-7309.002.patch
[jira] [Updated] (MAPREDUCE-7309) Improve performance of reading resource request for mapper/reducers from config
[ https://issues.apache.org/jira/browse/MAPREDUCE-7309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Bacsko updated MAPREDUCE-7309:
------------------------------------
    Fix Version/s: 3.3.1
                   3.1.5
                   3.2.2

> Improve performance of reading resource request for mapper/reducers from config
> -------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-7309
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7309
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: applicationmaster
>    Affects Versions: 3.0.0, 3.1.0, 3.2.0, 3.3.0
>            Reporter: Wangda Tan
>            Assignee: Peter Bacsko
>            Priority: Major
>             Fix For: 3.2.2, 3.4.0, 3.1.5, 3.3.1
>
>         Attachments: MAPREDUCE-7309-003.patch, MAPREDUCE-7309-004.patch, MAPREDUCE-7309-005.patch, MAPREDUCE-7309-branch-3.1-001.patch, MAPREDUCE-7309-branch-3.2-001.patch, MAPREDUCE-7309-branch-3.3-001.patch, MAPREDUCE-7309.001.patch, MAPREDUCE-7309.002.patch
[jira] [Updated] (MAPREDUCE-7309) Improve performance of reading resource request for mapper/reducers from config
[ https://issues.apache.org/jira/browse/MAPREDUCE-7309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Bacsko updated MAPREDUCE-7309:
------------------------------------
    Attachment:     (was: MAPREDUCE-7309-branch-3.3-001.patch)

> Improve performance of reading resource request for mapper/reducers from config
> -------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-7309
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7309
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: applicationmaster
>    Affects Versions: 3.0.0, 3.1.0, 3.2.0, 3.3.0
>            Reporter: Wangda Tan
>            Assignee: Peter Bacsko
>            Priority: Major
>             Fix For: 3.4.0
>
>         Attachments: MAPREDUCE-7309-003.patch, MAPREDUCE-7309-004.patch, MAPREDUCE-7309-005.patch, MAPREDUCE-7309-branch-3.1-001.patch, MAPREDUCE-7309-branch-3.2-001.patch, MAPREDUCE-7309-branch-3.3-001.patch, MAPREDUCE-7309.001.patch, MAPREDUCE-7309.002.patch
[jira] [Updated] (MAPREDUCE-7309) Improve performance of reading resource request for mapper/reducers from config
[ https://issues.apache.org/jira/browse/MAPREDUCE-7309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Bacsko updated MAPREDUCE-7309:
------------------------------------
    Attachment: MAPREDUCE-7309-branch-3.3-001.patch

> Improve performance of reading resource request for mapper/reducers from config
> -------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-7309
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7309
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: applicationmaster
>    Affects Versions: 3.0.0, 3.1.0, 3.2.0, 3.3.0
>            Reporter: Wangda Tan
>            Assignee: Peter Bacsko
>            Priority: Major
>             Fix For: 3.4.0
>
>         Attachments: MAPREDUCE-7309-003.patch, MAPREDUCE-7309-004.patch, MAPREDUCE-7309-005.patch, MAPREDUCE-7309-branch-3.1-001.patch, MAPREDUCE-7309-branch-3.2-001.patch, MAPREDUCE-7309-branch-3.3-001.patch, MAPREDUCE-7309.001.patch, MAPREDUCE-7309.002.patch
[jira] [Commented] (MAPREDUCE-7309) Improve performance of reading resource request for mapper/reducers from config
[ https://issues.apache.org/jira/browse/MAPREDUCE-7309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17238383#comment-17238383 ]

Peter Bacsko commented on MAPREDUCE-7309:
-----------------------------------------

Ok, re-uploading the branch-3.2 and branch-3.3 patches because all the Yetus runs went against branch-3.1.

> Improve performance of reading resource request for mapper/reducers from config
> -------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-7309
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7309
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: applicationmaster
>    Affects Versions: 3.0.0, 3.1.0, 3.2.0, 3.3.0
>            Reporter: Wangda Tan
>            Assignee: Peter Bacsko
>            Priority: Major
>             Fix For: 3.4.0
>
>         Attachments: MAPREDUCE-7309-003.patch, MAPREDUCE-7309-004.patch, MAPREDUCE-7309-005.patch, MAPREDUCE-7309-branch-3.1-001.patch, MAPREDUCE-7309-branch-3.2-001.patch, MAPREDUCE-7309-branch-3.3-001.patch, MAPREDUCE-7309.001.patch, MAPREDUCE-7309.002.patch
[jira] [Issue Comment Deleted] (MAPREDUCE-7309) Improve performance of reading resource request for mapper/reducers from config
[ https://issues.apache.org/jira/browse/MAPREDUCE-7309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Bacsko updated MAPREDUCE-7309:
------------------------------------
    Comment: was deleted

(was: Yetus precommit report: -1 overall. Prechecks passed (no case-conflicting files, no @author tags, 1 new or modified test file; codespell not available). branch-3.1 compile tests: mvninstall, compile, mvnsite, javadoc and findbugs failed for hadoop-mapreduce-client-app in branch-3.1, checkstyle failed to run, the shaded client build had errors, and a deprecated FindBugs config was in use. Patch compile tests: mvninstall and compile failed for hadoop-mapreduce-client-app in the patch […])
[jira] [Updated] (MAPREDUCE-7309) Improve performance of reading resource request for mapper/reducers from config
[ https://issues.apache.org/jira/browse/MAPREDUCE-7309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7309: Attachment: (was: MAPREDUCE-7309-branch-3.2-001.patch) > Improve performance of reading resource request for mapper/reducers from > config > --- > > Key: MAPREDUCE-7309 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-7309 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: applicationmaster >Affects Versions: 3.0.0, 3.1.0, 3.2.0, 3.3.0 >Reporter: Wangda Tan >Assignee: Peter Bacsko >Priority: Major > Fix For: 3.4.0 > > Attachments: MAPREDUCE-7309-003.patch, MAPREDUCE-7309-004.patch, > MAPREDUCE-7309-005.patch, MAPREDUCE-7309-branch-3.1-001.patch, > MAPREDUCE-7309-branch-3.2-001.patch, MAPREDUCE-7309-branch-3.3-001.patch, > MAPREDUCE-7309.001.patch, MAPREDUCE-7309.002.patch > > > This is an issue that could affect all releases that include YARN-6927. > Basically, we run a regex match repeatedly when we read the mapper/reducer resource > requests from the config files. With a large config file and a large number > of splits, this can take a long time. > We saw the AM take hours to parse the config when we had 200k+ splits with a > large config file (hundreds of KBs). > The problematic part is this: > {noformat} > private void populateResourceCapability(TaskType taskType) { > String resourceTypePrefix = > getResourceTypePrefix(taskType); > boolean memorySet = false; > boolean cpuVcoresSet = false; > if (resourceTypePrefix != null) { > List<ResourceInformation> resourceRequests = > ResourceUtils.getRequestedResourcesFromConfig(conf, > resourceTypePrefix); > {noformat} > Inside {{ResourceUtils.getRequestedResourcesFromConfig()}}, we call > {{Configuration.getValByRegex()}}, which goes through all property keys that > come from the MapReduce job configuration (jobconf.xml). If the job config is > large (e.g. 
due to being part of an MR pipeline and having been populated by an > earlier job), then this results in running a regex match over > all properties over and over again. This is unnecessary, because all > mappers and all reducers, respectively, share the same config. > We should do proper caching for pre-configured resource requests.
[jira] [Updated] (MAPREDUCE-7309) Improve performance of reading resource request for mapper/reducers from config
[ https://issues.apache.org/jira/browse/MAPREDUCE-7309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7309: Attachment: MAPREDUCE-7309-branch-3.2-001.patch > Improve performance of reading resource request for mapper/reducers from > config > --- > > Key: MAPREDUCE-7309 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-7309 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: applicationmaster >Affects Versions: 3.0.0, 3.1.0, 3.2.0, 3.3.0 >Reporter: Wangda Tan >Assignee: Peter Bacsko >Priority: Major > Fix For: 3.4.0 > > Attachments: MAPREDUCE-7309-003.patch, MAPREDUCE-7309-004.patch, > MAPREDUCE-7309-005.patch, MAPREDUCE-7309-branch-3.1-001.patch, > MAPREDUCE-7309-branch-3.2-001.patch, MAPREDUCE-7309-branch-3.3-001.patch, > MAPREDUCE-7309.001.patch, MAPREDUCE-7309.002.patch > > > This is an issue that could affect all releases that include YARN-6927. > Basically, we run a regex match repeatedly when we read the mapper/reducer resource > requests from the config files. With a large config file and a large number > of splits, this can take a long time. > We saw the AM take hours to parse the config when we had 200k+ splits with a > large config file (hundreds of KBs). > The problematic part is this: > {noformat} > private void populateResourceCapability(TaskType taskType) { > String resourceTypePrefix = > getResourceTypePrefix(taskType); > boolean memorySet = false; > boolean cpuVcoresSet = false; > if (resourceTypePrefix != null) { > List<ResourceInformation> resourceRequests = > ResourceUtils.getRequestedResourcesFromConfig(conf, > resourceTypePrefix); > {noformat} > Inside {{ResourceUtils.getRequestedResourcesFromConfig()}}, we call > {{Configuration.getValByRegex()}}, which goes through all property keys that > come from the MapReduce job configuration (jobconf.xml). If the job config is > large (e.g. 
due to being part of an MR pipeline and having been populated by an > earlier job), then this results in running a regex match over > all properties over and over again. This is unnecessary, because all > mappers and all reducers, respectively, share the same config. > We should do proper caching for pre-configured resource requests.
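The cost structure described above — one regex pass over every property key for every task request — can be illustrated with a minimal, self-contained sketch. A plain `Map` stands in for Hadoop's `Configuration`, and the method here only mimics the scanning behaviour of `Configuration.getValByRegex()`; it is not the Hadoop API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Pattern;

// Illustrative only: shows why repeating a whole-config regex scan per task
// multiplies the cost, while hoisting the scan out pays it once.
public class RegexScanCost {
    static int scans = 0;  // counts individual key-vs-regex tests

    // Mimics the shape of Configuration.getValByRegex(): one regex test per key.
    static Map<String, String> getValByRegex(Map<String, String> conf, String regex) {
        Pattern p = Pattern.compile(regex);
        Map<String, String> out = new HashMap<>();
        for (Map.Entry<String, String> e : conf.entrySet()) {
            scans++;
            if (p.matcher(e.getKey()).matches()) {
                out.put(e.getKey(), e.getValue());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        for (int i = 0; i < 1000; i++) {
            conf.put("some.unrelated.property." + i, "v");  // bulk of a big jobconf
        }
        conf.put("mapreduce.map.resource.memory-mb", "2048");

        // Uncached: one full scan per task request.
        int tasks = 200;
        for (int t = 0; t < tasks; t++) {
            getValByRegex(conf, "^mapreduce\\.map\\.resource\\..*");
        }
        System.out.println("uncached key tests: " + scans);  // tasks * properties

        // Cached: scan once, reuse the result for every task.
        scans = 0;
        Map<String, String> cached = getValByRegex(conf, "^mapreduce\\.map\\.resource\\..*");
        long total = 0;
        for (int t = 0; t < tasks; t++) {
            total += cached.size();  // reuse the memoized result; no rescan
        }
        System.out.println("cached key tests: " + scans);    // properties only
    }
}
```

With 200k+ splits and a jobconf of thousands of keys, the uncached variant does splits × keys regex tests, which matches the hours-long AM startup reported above.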
[jira] [Updated] (MAPREDUCE-7309) Improve performance of reading resource request for mapper/reducers from config
[ https://issues.apache.org/jira/browse/MAPREDUCE-7309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7309: Description: This is an issue that could affect all releases that include YARN-6927. Basically, we run a regex match repeatedly when we read the mapper/reducer resource requests from the config files. With a large config file and a large number of splits, this can take a long time. We saw the AM take hours to parse the config when we had 200k+ splits with a large config file (hundreds of KBs). The problematic part is this: {noformat} private void populateResourceCapability(TaskType taskType) { String resourceTypePrefix = getResourceTypePrefix(taskType); boolean memorySet = false; boolean cpuVcoresSet = false; if (resourceTypePrefix != null) { List<ResourceInformation> resourceRequests = ResourceUtils.getRequestedResourcesFromConfig(conf, resourceTypePrefix); {noformat} Inside {{ResourceUtils.getRequestedResourcesFromConfig()}}, we call {{Configuration.getValByRegex()}}, which goes through all property keys that come from the MapReduce job configuration (jobconf.xml). If the job config is large (e.g. due to being part of an MR pipeline and having been populated by an earlier job), then this results in running a regex match over all properties over and over again. This is unnecessary, because all mappers and all reducers, respectively, share the same config. We should do proper caching for pre-configured resource requests. was: This is an issue that could affect all releases that include YARN-6927. Basically, we run a regex match repeatedly when we read the mapper/reducer resource requests from the config files. With a large config file and a large number of splits, this can take a long time. We saw the AM take hours to parse the config when we had 200k+ splits with a large config file (hundreds of KBs). 
The problematic part is this: {noformat} private void populateResourceCapability(TaskType taskType) { String resourceTypePrefix = getResourceTypePrefix(taskType); boolean memorySet = false; boolean cpuVcoresSet = false; if (resourceTypePrefix != null) { List<ResourceInformation> resourceRequests = ResourceUtils.getRequestedResourcesFromConfig(conf, resourceTypePrefix); {noformat} Inside {{ResourceUtils.getRequestedResourcesFromConfig()}}, we call {{Configuration.getValByRegex()}}, which goes through all property keys that come from the MapReduce job configuration (jobconf.xml). If the job config is large (e.g. due to being part of an MR pipeline and having been populated by an earlier job in the stage), then this results in running a regex match over all properties over and over again. This is unnecessary, because all mappers and all reducers, respectively, share the same config. We should do proper caching for pre-configured resource requests. > Improve performance of reading resource request for mapper/reducers from > config > --- > > Key: MAPREDUCE-7309 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-7309 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: applicationmaster >Affects Versions: 3.0.0, 3.1.0, 3.2.0, 3.3.0 >Reporter: Wangda Tan >Assignee: Peter Bacsko >Priority: Major > Fix For: 3.4.0 > > Attachments: MAPREDUCE-7309-003.patch, MAPREDUCE-7309-004.patch, > MAPREDUCE-7309-005.patch, MAPREDUCE-7309-branch-3.1-001.patch, > MAPREDUCE-7309-branch-3.2-001.patch, MAPREDUCE-7309-branch-3.3-001.patch, > MAPREDUCE-7309.001.patch, MAPREDUCE-7309.002.patch > > > This is an issue that could affect all releases that include YARN-6927. > Basically, we run a regex match repeatedly when we read the mapper/reducer resource > requests from the config files. With a large config file and a large number > of splits, this can take a long time. > We saw the AM take hours to parse the config when we had 200k+ splits with a > large config file (hundreds of KBs). 
> The problematic part is this: > {noformat} > private void populateResourceCapability(TaskType taskType) { > String resourceTypePrefix = > getResourceTypePrefix(taskType); > boolean memorySet = false; > boolean cpuVcoresSet = false; > if (resourceTypePrefix != null) { > List<ResourceInformation> resourceRequests = > ResourceUtils.getRequestedResourcesFromConfig(conf, > resourceTypePrefix); > {noformat} > Inside {{ResourceUtils.getRequestedResourcesFromConfig()}}, we call > {{Configuration.getValByRegex()}}, which goes through all property keys that > come from the MapReduce job configuration (jobconf.xml). If the job config is > large (e.g. due to being part of an MR pipeline and
[jira] [Updated] (MAPREDUCE-7309) Improve performance of reading resource request for mapper/reducers from config
[ https://issues.apache.org/jira/browse/MAPREDUCE-7309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7309: Description: This is an issue that could affect all releases that include YARN-6927. Basically, we run a regex match repeatedly when we read the mapper/reducer resource requests from the config files. With a large config file and a large number of splits, this can take a long time. We saw the AM take hours to parse the config when we had 200k+ splits with a large config file (hundreds of KBs). The problematic part is this: {noformat} private void populateResourceCapability(TaskType taskType) { String resourceTypePrefix = getResourceTypePrefix(taskType); boolean memorySet = false; boolean cpuVcoresSet = false; if (resourceTypePrefix != null) { List<ResourceInformation> resourceRequests = ResourceUtils.getRequestedResourcesFromConfig(conf, resourceTypePrefix); {noformat} Inside {{ResourceUtils.getRequestedResourcesFromConfig()}}, we call {{Configuration.getValByRegex()}}, which goes through all property keys that come from the MapReduce job configuration (jobconf.xml). If the job config is large (e.g. due to being part of an MR pipeline and having been populated by an earlier job in the stage), then this results in running a regex match over all properties over and over again. This is unnecessary, because all mappers and all reducers, respectively, share the same config. We should do proper caching for pre-configured resource requests. was: This is an issue that could affect all releases that include YARN-6927. Basically, we run a regex match repeatedly when we read the mapper/reducer resource requests from the config files. With a large config file and a large number of splits, this can take a long time. We saw the AM take hours to parse the config when we had 200k+ splits with a large config file (hundreds of KBs). We should do proper caching for pre-configured resource requests. 
> Improve performance of reading resource request for mapper/reducers from > config > --- > > Key: MAPREDUCE-7309 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-7309 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: applicationmaster >Affects Versions: 3.0.0, 3.1.0, 3.2.0, 3.3.0 >Reporter: Wangda Tan >Assignee: Peter Bacsko >Priority: Major > Fix For: 3.4.0 > > Attachments: MAPREDUCE-7309-003.patch, MAPREDUCE-7309-004.patch, > MAPREDUCE-7309-005.patch, MAPREDUCE-7309-branch-3.1-001.patch, > MAPREDUCE-7309-branch-3.2-001.patch, MAPREDUCE-7309-branch-3.3-001.patch, > MAPREDUCE-7309.001.patch, MAPREDUCE-7309.002.patch > > > This is an issue that could affect all releases that include YARN-6927. > Basically, we run a regex match repeatedly when we read the mapper/reducer resource > requests from the config files. With a large config file and a large number > of splits, this can take a long time. > We saw the AM take hours to parse the config when we had 200k+ splits with a > large config file (hundreds of KBs). > The problematic part is this: > {noformat} > private void populateResourceCapability(TaskType taskType) { > String resourceTypePrefix = > getResourceTypePrefix(taskType); > boolean memorySet = false; > boolean cpuVcoresSet = false; > if (resourceTypePrefix != null) { > List<ResourceInformation> resourceRequests = > ResourceUtils.getRequestedResourcesFromConfig(conf, > resourceTypePrefix); > {noformat} > Inside {{ResourceUtils.getRequestedResourcesFromConfig()}}, we call > {{Configuration.getValByRegex()}}, which goes through all property keys that > come from the MapReduce job configuration (jobconf.xml). If the job config is > large (e.g. due to being part of an MR pipeline and having been populated by an > earlier job in the stage), then this results in running a regex match over > all properties over and over again. This is unnecessary, > because all mappers and all reducers, respectively, share the same config. 
> We should do proper caching for pre-configured resource requests.
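The proposed fix — "proper caching for pre-configured resource requests" — can be sketched as a per-task-type memoization, since every mapper shares one config and every reducer shares one config. This is a hedged sketch, not the committed patch: `TaskType`, `ResourceRequestSpec`, and `parseFromConfig` are hypothetical stand-ins for the MapReduce AM internals, and a plain `Map` again stands in for `Configuration`.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the caching idea: the expensive scan over all config keys runs
// at most once per task type; every later request reuses the memoized list.
public class ResourceRequestCache {
    public enum TaskType { MAP, REDUCE }

    public static final class ResourceRequestSpec {
        public final String name;  // e.g. "memory-mb" (illustrative key suffix)
        public final long value;
        ResourceRequestSpec(String name, long value) {
            this.name = name;
            this.value = value;
        }
    }

    private final Map<TaskType, List<ResourceRequestSpec>> cache =
        new ConcurrentHashMap<>();

    public List<ResourceRequestSpec> requestsFor(TaskType type,
                                                 Map<String, String> conf) {
        // computeIfAbsent = scan once per task type, then serve from cache.
        return cache.computeIfAbsent(type, t -> parseFromConfig(t, conf));
    }

    // Stand-in for ResourceUtils.getRequestedResourcesFromConfig(): walk all
    // keys and keep those under the per-task-type prefix.
    private List<ResourceRequestSpec> parseFromConfig(TaskType type,
                                                      Map<String, String> conf) {
        String prefix = (type == TaskType.MAP)
            ? "mapreduce.map.resource." : "mapreduce.reduce.resource.";
        List<ResourceRequestSpec> out = new ArrayList<>();
        for (Map.Entry<String, String> e : conf.entrySet()) {
            if (e.getKey().startsWith(prefix)) {
                out.add(new ResourceRequestSpec(
                    e.getKey().substring(prefix.length()),
                    Long.parseLong(e.getValue())));
            }
        }
        return out;
    }
}
```

Keeping the cache keyed only by task type is safe here precisely because the jobconf is immutable once the AM has started; if the config could change per task, this memoization would be incorrect.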
[jira] [Updated] (MAPREDUCE-7309) Improve performance of reading resource request for mapper/reducers from config
[ https://issues.apache.org/jira/browse/MAPREDUCE-7309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7309: Attachment: MAPREDUCE-7309-branch-3.1-001.patch > Improve performance of reading resource request for mapper/reducers from > config > --- > > Key: MAPREDUCE-7309 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-7309 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: applicationmaster >Affects Versions: 3.0.0, 3.1.0, 3.2.0, 3.3.0 >Reporter: Wangda Tan >Assignee: Peter Bacsko >Priority: Major > Fix For: 3.4.0 > > Attachments: MAPREDUCE-7309-003.patch, MAPREDUCE-7309-004.patch, > MAPREDUCE-7309-005.patch, MAPREDUCE-7309-branch-3.1-001.patch, > MAPREDUCE-7309-branch-3.2-001.patch, MAPREDUCE-7309-branch-3.3-001.patch, > MAPREDUCE-7309.001.patch, MAPREDUCE-7309.002.patch > > > This is an issue that could affect all releases that include YARN-6927. > Basically, we run a regex match repeatedly when we read the mapper/reducer resource > requests from the config files. With a large config file and a large number > of splits, this can take a long time. > We saw the AM take hours to parse the config when we had 200k+ splits with a > large config file (hundreds of KBs). > We should do proper caching for pre-configured resource requests.
[jira] [Updated] (MAPREDUCE-7309) Improve performance of reading resource request for mapper/reducers from config
[ https://issues.apache.org/jira/browse/MAPREDUCE-7309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7309: Attachment: MAPREDUCE-7309-branch-3.2-001.patch > Improve performance of reading resource request for mapper/reducers from > config > --- > > Key: MAPREDUCE-7309 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-7309 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: applicationmaster >Affects Versions: 3.0.0, 3.1.0, 3.2.0, 3.3.0 >Reporter: Wangda Tan >Assignee: Peter Bacsko >Priority: Major > Fix For: 3.4.0 > > Attachments: MAPREDUCE-7309-003.patch, MAPREDUCE-7309-004.patch, > MAPREDUCE-7309-005.patch, MAPREDUCE-7309-branch-3.2-001.patch, > MAPREDUCE-7309-branch-3.3-001.patch, MAPREDUCE-7309.001.patch, > MAPREDUCE-7309.002.patch > > > This is an issue that could affect all releases that include YARN-6927. > Basically, we run a regex match repeatedly when we read the mapper/reducer resource > requests from the config files. With a large config file and a large number > of splits, this can take a long time. > We saw the AM take hours to parse the config when we had 200k+ splits with a > large config file (hundreds of KBs). > We should do proper caching for pre-configured resource requests.
[jira] [Updated] (MAPREDUCE-7309) Improve performance of reading resource request for mapper/reducers from config
[ https://issues.apache.org/jira/browse/MAPREDUCE-7309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7309: Attachment: MAPREDUCE-7309-branch-3.3-001.patch > Improve performance of reading resource request for mapper/reducers from > config > --- > > Key: MAPREDUCE-7309 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-7309 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: applicationmaster >Affects Versions: 3.0.0, 3.1.0, 3.2.0, 3.3.0 >Reporter: Wangda Tan >Assignee: Peter Bacsko >Priority: Major > Fix For: 3.4.0 > > Attachments: MAPREDUCE-7309-003.patch, MAPREDUCE-7309-004.patch, > MAPREDUCE-7309-005.patch, MAPREDUCE-7309-branch-3.3-001.patch, > MAPREDUCE-7309.001.patch, MAPREDUCE-7309.002.patch > > > This is an issue that could affect all releases that include YARN-6927. > Basically, we run a regex match repeatedly when we read the mapper/reducer resource > requests from the config files. With a large config file and a large number > of splits, this can take a long time. > We saw the AM take hours to parse the config when we had 200k+ splits with a > large config file (hundreds of KBs). > We should do proper caching for pre-configured resource requests.
[jira] [Updated] (MAPREDUCE-7309) Improve performance of reading resource request for mapper/reducers from config
[ https://issues.apache.org/jira/browse/MAPREDUCE-7309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7309: Attachment: MAPREDUCE-7309-005.patch > Improve performance of reading resource request for mapper/reducers from > config > --- > > Key: MAPREDUCE-7309 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-7309 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: applicationmaster >Affects Versions: 3.0.0, 3.1.0, 3.2.0, 3.3.0 >Reporter: Wangda Tan >Assignee: Peter Bacsko >Priority: Major > Attachments: MAPREDUCE-7309-003.patch, MAPREDUCE-7309-004.patch, > MAPREDUCE-7309-005.patch, MAPREDUCE-7309.001.patch, MAPREDUCE-7309.002.patch > > > This is an issue that could affect all releases that include YARN-6927. > Basically, we run a regex match repeatedly when we read the mapper/reducer resource > requests from the config files. With a large config file and a large number > of splits, this can take a long time. > We saw the AM take hours to parse the config when we had 200k+ splits with a > large config file (hundreds of KBs). > We should do proper caching for pre-configured resource requests.
[jira] [Updated] (MAPREDUCE-7309) Improve performance of reading resource request for mapper/reducers from config
[ https://issues.apache.org/jira/browse/MAPREDUCE-7309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7309: Attachment: MAPREDUCE-7309-005.patch > Improve performance of reading resource request for mapper/reducers from > config > --- > > Key: MAPREDUCE-7309 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-7309 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: applicationmaster >Affects Versions: 3.0.0, 3.1.0, 3.2.0, 3.3.0 >Reporter: Wangda Tan >Assignee: Peter Bacsko >Priority: Major > Attachments: MAPREDUCE-7309-003.patch, MAPREDUCE-7309-004.patch, > MAPREDUCE-7309.001.patch, MAPREDUCE-7309.002.patch > > > This is an issue that could affect all releases that include YARN-6927. > Basically, we run a regex match repeatedly when we read the mapper/reducer resource > requests from the config files. With a large config file and a large number > of splits, this can take a long time. > We saw the AM take hours to parse the config when we had 200k+ splits with a > large config file (hundreds of KBs). > We should do proper caching for pre-configured resource requests.
[jira] [Updated] (MAPREDUCE-7309) Improve performance of reading resource request for mapper/reducers from config
[ https://issues.apache.org/jira/browse/MAPREDUCE-7309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7309: Attachment: (was: MAPREDUCE-7309-005.patch) > Improve performance of reading resource request for mapper/reducers from > config > --- > > Key: MAPREDUCE-7309 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-7309 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: applicationmaster >Affects Versions: 3.0.0, 3.1.0, 3.2.0, 3.3.0 >Reporter: Wangda Tan >Assignee: Peter Bacsko >Priority: Major > Attachments: MAPREDUCE-7309-003.patch, MAPREDUCE-7309-004.patch, > MAPREDUCE-7309.001.patch, MAPREDUCE-7309.002.patch > > > This is an issue that could affect all releases that include YARN-6927. > Basically, we run a regex match repeatedly when we read the mapper/reducer resource > requests from the config files. With a large config file and a large number > of splits, this can take a long time. > We saw the AM take hours to parse the config when we had 200k+ splits with a > large config file (hundreds of KBs). > We should do proper caching for pre-configured resource requests.
[jira] [Updated] (MAPREDUCE-7309) Improve performance of reading resource request for mapper/reducers from config
[ https://issues.apache.org/jira/browse/MAPREDUCE-7309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7309: Attachment: MAPREDUCE-7309-004.patch > Improve performance of reading resource request for mapper/reducers from > config > --- > > Key: MAPREDUCE-7309 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-7309 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: applicationmaster >Affects Versions: 3.0.0, 3.1.0, 3.2.0, 3.3.0 >Reporter: Wangda Tan >Assignee: Peter Bacsko >Priority: Major > Attachments: MAPREDUCE-7309-003.patch, MAPREDUCE-7309-004.patch, > MAPREDUCE-7309.001.patch, MAPREDUCE-7309.002.patch > > > This is an issue that could affect all releases that include YARN-6927. > Basically, we run a regex match repeatedly when we read the mapper/reducer resource > requests from the config files. With a large config file and a large number > of splits, this can take a long time. > We saw the AM take hours to parse the config when we had 200k+ splits with a > large config file (hundreds of KBs). > We should do proper caching for pre-configured resource requests.
[jira] [Commented] (MAPREDUCE-7309) Improve performance of reading resource request for mapper/reducers from config
[ https://issues.apache.org/jira/browse/MAPREDUCE-7309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17237031#comment-17237031 ] Peter Bacsko commented on MAPREDUCE-7309: - Thanks for the patch [~wangda], but I feel it may be too complicated. For the sake of simplicity and an easier backport, I suggest a lighter approach in patch v3. > Improve performance of reading resource request for mapper/reducers from > config > --- > > Key: MAPREDUCE-7309 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-7309 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: applicationmaster >Affects Versions: 3.0.0, 3.1.0, 3.2.0, 3.3.0 >Reporter: Wangda Tan >Assignee: Wangda Tan >Priority: Major > Attachments: MAPREDUCE-7309-003.patch, MAPREDUCE-7309.001.patch, > MAPREDUCE-7309.002.patch > > > This is an issue that could affect all releases that include YARN-6927. > Basically, we run a regex match repeatedly when we read the mapper/reducer resource > requests from the config files. With a large config file and a large number > of splits, this can take a long time. > We saw the AM take hours to parse the config when we had 200k+ splits with a > large config file (hundreds of KBs). > We should do proper caching for pre-configured resource requests.
[jira] [Updated] (MAPREDUCE-7309) Improve performance of reading resource request for mapper/reducers from config
[ https://issues.apache.org/jira/browse/MAPREDUCE-7309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7309: Attachment: MAPREDUCE-7309-003.patch > Improve performance of reading resource request for mapper/reducers from > config > --- > > Key: MAPREDUCE-7309 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-7309 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: applicationmaster >Affects Versions: 3.0.0, 3.1.0, 3.2.0, 3.3.0 >Reporter: Wangda Tan >Assignee: Wangda Tan >Priority: Major > Attachments: MAPREDUCE-7309-003.patch, MAPREDUCE-7309.001.patch, > MAPREDUCE-7309.002.patch > > > This is an issue that could affect all releases that include YARN-6927. > Basically, we run a regex match repeatedly when we read the mapper/reducer resource > requests from the config files. With a large config file and a large number > of splits, this can take a long time. > We saw the AM take hours to parse the config when we had 200k+ splits with a > large config file (hundreds of KBs). > We should do proper caching for pre-configured resource requests.
[jira] [Updated] (MAPREDUCE-7304) Enhance the map-reduce Job end notifier to be able to notify the given URL via a custom class
[ https://issues.apache.org/jira/browse/MAPREDUCE-7304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7304: Resolution: Fixed Status: Resolved (was: Patch Available) > Enhance the map-reduce Job end notifier to be able to notify the given URL > via a custom class > - > > Key: MAPREDUCE-7304 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-7304 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: mrv2 >Reporter: Daniel Fritsi >Assignee: Zoltán Erdmann >Priority: Major > Fix For: 3.2.2, 3.4.0, 3.1.5, 3.3.1 > > Attachments: MAPREDUCE-7304-001.patch, MAPREDUCE-7304-002.patch, > MAPREDUCE-7304-003.patch, MAPREDUCE-7304-004.patch, > MAPREDUCE-7304-branch-3.1-001.patch, MAPREDUCE-7304-branch-3.2-001.patch, > MAPREDUCE-7304-branch-3.3-001.patch > > > Currently, > {color:#0747a6}{{*org.apache.hadoop.mapreduce.v2.app.JobEndNotifier*}}{color} > offers only very limited configuration of how the given job end notification URL > is notified. We should enhance this, but instead of adding more > *{color:#0747A6}{{mapreduce.job.end-notification.*}}{color}* properties to > configure the underlying HttpURLConnection, we should add a new > property so users can plug in their own notifier class.
[jira] [Updated] (MAPREDUCE-7304) Enhance the map-reduce Job end notifier to be able to notify the given URL via a custom class
[ https://issues.apache.org/jira/browse/MAPREDUCE-7304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7304: Fix Version/s: 3.3.1 3.1.5 3.4.0 3.2.2
[jira] [Commented] (MAPREDUCE-7304) Enhance the map-reduce Job end notifier to be able to notify the given URL via a custom class
[ https://issues.apache.org/jira/browse/MAPREDUCE-7304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17236110#comment-17236110 ] Peter Bacsko commented on MAPREDUCE-7304: - Thanks [~zerdmann], committed this to trunk, branch-3.3, branch-3.2 and branch-3.1.
[jira] [Commented] (MAPREDUCE-7304) Enhance the map-reduce Job end notifier to be able to notify the given URL via a custom class
[ https://issues.apache.org/jira/browse/MAPREDUCE-7304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17234669#comment-17234669 ] Peter Bacsko commented on MAPREDUCE-7304: - Thanks [~zerdmann], +1. The remaining checkstyle issues can be ignored; they are just javadoc. As we discussed in chat, please also create a patch for branch-3.3, because that branch is relatively new.
[jira] [Commented] (MAPREDUCE-7304) Enhance the map-reduce Job end notifier to be able to notify the given URL via a custom class
[ https://issues.apache.org/jira/browse/MAPREDUCE-7304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17233521#comment-17233521 ] Peter Bacsko commented on MAPREDUCE-7304: - {{Class.newInstance}} is deprecated; replace it with {{Class.getDeclaredConstructor().newInstance()}}.
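For context on the review comment above, a minimal before/after sketch (class names here are made up for illustration). `Class.newInstance()` has been deprecated since Java 9 because it rethrows any checked exception from the constructor without wrapping it, defeating compile-time exception checking; `getDeclaredConstructor().newInstance()` wraps such exceptions in `InvocationTargetException` instead.

```java
// Illustration of the suggested replacement; NewInstanceFix and Notifier are
// hypothetical names, not classes from the patch.
public class NewInstanceFix {
    public static class Notifier {
        public Notifier() {}
        public String ping() { return "ok"; }
    }

    public static void main(String[] args) throws ReflectiveOperationException {
        Class<Notifier> c = Notifier.class;
        // Old, deprecated since Java 9:
        //   Notifier n = c.newInstance();
        // Replacement suggested in the review:
        Notifier n = c.getDeclaredConstructor().newInstance();
        System.out.println(n.ping()); // prints "ok"
    }
}
```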
[jira] [Commented] (MAPREDUCE-7304) Enhance the map-reduce Job end notifier to be able to notify the given URL via a custom class
[ https://issues.apache.org/jira/browse/MAPREDUCE-7304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17233517#comment-17233517 ] Peter Bacsko commented on MAPREDUCE-7304: - [~zerdmann] please fix the checkstyle issues; there are quite a few.
[jira] [Updated] (MAPREDUCE-7302) Upgrading to JUnit 4.13 causes testcase TestFetcher.testCorruptedIFile() to fail
[ https://issues.apache.org/jira/browse/MAPREDUCE-7302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7302: Resolution: Fixed Status: Resolved (was: Patch Available) > Upgrading to JUnit 4.13 causes testcase TestFetcher.testCorruptedIFile() to > fail > > > Key: MAPREDUCE-7302 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-7302 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: test >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Major > Fix For: 3.4.0 > > Attachments: MAPREDUCE-7302-001.patch, MAPREDUCE-7302-002.patch, > MAPREDUCE-7302-003.patch > > > See related ticket YARN-10460. JUnit 4.13 causes the same failure: > {noformat} > [ERROR] Tests run: 16, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: > 1.851 s <<< FAILURE! - in org.apache.hadoop.mapreduce.task.reduce.TestFetcher > [ERROR] > testCorruptedIFile(org.apache.hadoop.mapreduce.task.reduce.TestFetcher) Time > elapsed: 0.15 s <<< ERROR! > java.lang.IllegalThreadStateException > at java.lang.ThreadGroup.addUnstarted(ThreadGroup.java:867) > at java.lang.Thread.init(Thread.java:405) > at java.lang.Thread.init(Thread.java:349) > at java.lang.Thread.<init>(Thread.java:678) > at > java.util.concurrent.Executors$DefaultThreadFactory.newThread(Executors.java:613) > at > org.apache.hadoop.thirdparty.com.google.common.util.concurrent.ThreadFactoryBuilder$1.newThread(ThreadFactoryBuilder.java:163) > at > java.util.concurrent.ThreadPoolExecutor$Worker.<init>(ThreadPoolExecutor.java:619) > at > java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:932) > at > java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1367) > at > org.apache.hadoop.io.ReadaheadPool.submitReadahead(ReadaheadPool.java:159) > at > org.apache.hadoop.io.ReadaheadPool.readaheadStream(ReadaheadPool.java:141) > at > org.apache.hadoop.mapred.IFileInputStream.doReadahead(IFileInputStream.java:159) > at > org.apache.hadoop.mapred.IFileInputStream.<init>(IFileInputStream.java:88) > at > org.apache.hadoop.mapreduce.task.reduce.TestFetcher.testCorruptedIFile(TestFetcher.java:587) > {noformat}
[jira] [Updated] (MAPREDUCE-7302) Upgrading to JUnit 4.13 causes testcase TestFetcher.testCorruptedIFile() to fail
[ https://issues.apache.org/jira/browse/MAPREDUCE-7302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7302: Fix Version/s: 3.4.0
[jira] [Commented] (MAPREDUCE-7302) Upgrading to JUnit 4.13 causes testcase TestFetcher.testCorruptedIFile() to fail
[ https://issues.apache.org/jira/browse/MAPREDUCE-7302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17222065#comment-17222065 ] Peter Bacsko commented on MAPREDUCE-7302: - Thanks for the review [~aajisaka], I committed this to trunk. Do we need a backport to other branches?
[jira] [Commented] (MAPREDUCE-7302) Upgrading to JUnit 4.13 causes testcase TestFetcher.testCorruptedIFile() to fail
[ https://issues.apache.org/jira/browse/MAPREDUCE-7302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17221567#comment-17221567 ] Peter Bacsko commented on MAPREDUCE-7302: - [~aajisaka] you need to compile hadoop-common with "-Pnative" so that libhadoop is created. Otherwise, {{ReadaheadPool}} doesn't create an instance.
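The build tip above can be sketched as commands. This is a build-configuration fragment, not something the ticket itself specifies: the `-Pnative` profile comes from Hadoop's BUILDING.txt, while the exact module path and output location are assumptions that may vary by Hadoop version and platform (the native profile also requires cmake and a C toolchain).

```shell
# Build hadoop-common with the native profile so libhadoop is produced;
# without it, ReadaheadPool has no native instance and readahead is skipped.
mvn clean install -Pnative -DskipTests -pl hadoop-common-project/hadoop-common -am

# Verify the native library was built (output path may vary by version):
ls hadoop-common-project/hadoop-common/target/native/target/usr/local/lib/
```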
[jira] [Commented] (MAPREDUCE-7302) Upgrading to JUnit 4.13 causes testcase TestFetcher.testCorruptedIFile() to fail
[ https://issues.apache.org/jira/browse/MAPREDUCE-7302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17220776#comment-17220776 ] Peter Bacsko commented on MAPREDUCE-7302: - OK, now the build is green. Ping [~aajisaka].
[jira] [Commented] (MAPREDUCE-7303) Fix TestJobResourceUploader failures after HADOOP-16878
[ https://issues.apache.org/jira/browse/MAPREDUCE-7303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17218244#comment-17218244 ] Peter Bacsko commented on MAPREDUCE-7303: - [~aajisaka] please review this patch. > Fix TestJobResourceUploader failures after HADOOP-16878 > --- > > Key: MAPREDUCE-7303 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-7303 > Project: Hadoop Map/Reduce > Issue Type: Bug >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Major > Labels: test > Attachments: MAPREDUCE-7303-001.patch > > > Currently, two test cases fail with NPE: > {{org.apache.hadoop.mapreduce.TestJobResourceUploader.testOriginalPathIsRoot()}} > {{org.apache.hadoop.mapreduce.TestJobResourceUploader.testOriginalPathEndsInSlash()}} > Root cause is the src/dst qualified path check introduced by HADOOP-16878.
[jira] [Commented] (MAPREDUCE-7302) Upgrading to JUnit 4.13 causes testcase TestFetcher.testCorruptedIFile() to fail
[ https://issues.apache.org/jira/browse/MAPREDUCE-7302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17218243#comment-17218243 ] Peter Bacsko commented on MAPREDUCE-7302: - [~aajisaka] could you take a look & commit it? Thanks.
[jira] [Updated] (MAPREDUCE-7303) Fix TestJobResourceUploader failures after HADOOP-16878
[ https://issues.apache.org/jira/browse/MAPREDUCE-7303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7303: Status: Patch Available (was: Open)
[jira] [Updated] (MAPREDUCE-7303) Fix TestJobResourceUploader failures after HADOOP-16878
[ https://issues.apache.org/jira/browse/MAPREDUCE-7303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7303: Labels: test (was: )
[jira] [Commented] (MAPREDUCE-7302) Upgrading to JUnit 4.13 causes testcase TestFetcher.testCorruptedIFile() to fail
[ https://issues.apache.org/jira/browse/MAPREDUCE-7302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17218156#comment-17218156 ] Peter Bacsko commented on MAPREDUCE-7302: - Created MAPREDUCE-7303 for the unit test failure. It was caused by a recent commit.
[jira] [Updated] (MAPREDUCE-7303) Fix TestJobResourceUploader failures after HADOOP-16878
[ https://issues.apache.org/jira/browse/MAPREDUCE-7303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7303: Attachment: MAPREDUCE-7303-001.patch
[jira] [Created] (MAPREDUCE-7303) Fix TestJobResourceUploader failures after HADOOP-16878
Peter Bacsko created MAPREDUCE-7303: --- Summary: Fix TestJobResourceUploader failures after HADOOP-16878 Key: MAPREDUCE-7303 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7303 Project: Hadoop Map/Reduce Issue Type: Bug Reporter: Peter Bacsko Assignee: Peter Bacsko Currently, two test cases fail with NPE: {{org.apache.hadoop.mapreduce.TestJobResourceUploader.testOriginalPathIsRoot()}} {{org.apache.hadoop.mapreduce.TestJobResourceUploader.testOriginalPathEndsInSlash()}} Root cause is the src/dst qualified path check introduced by HADOOP-16878.
[jira] [Commented] (MAPREDUCE-7302) Upgrading to JUnit 4.13 causes testcase TestFetcher.testCorruptedIFile() to fail
[ https://issues.apache.org/jira/browse/MAPREDUCE-7302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17217849#comment-17217849 ] Peter Bacsko commented on MAPREDUCE-7302: - The {{TestJobResourceUploader}} failure is weird; I don't think it's related, but I'll check it out tomorrow regardless.
[jira] [Updated] (MAPREDUCE-7302) Upgrading to JUnit 4.13 causes testcase TestFetcher.testCorruptedIFile() to fail
[ https://issues.apache.org/jira/browse/MAPREDUCE-7302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7302: Attachment: MAPREDUCE-7302-003.patch
[jira] [Commented] (MAPREDUCE-7302) Upgrading to JUnit 4.13 causes testcase TestFetcher.testCorruptedIFile() to fail
[ https://issues.apache.org/jira/browse/MAPREDUCE-7302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17217632#comment-17217632 ] Peter Bacsko commented on MAPREDUCE-7302: - Sure. Uploaded patch v3 with the new import.
[jira] [Updated] (MAPREDUCE-7302) Upgrading to JUnit 4.13 causes testcase TestFetcher.testCorruptedIFile() to fail
[ https://issues.apache.org/jira/browse/MAPREDUCE-7302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7302: Attachment: MAPREDUCE-7302-002.patch
[jira] [Updated] (MAPREDUCE-7302) Upgrading to JUnit 4.13 causes testcase TestFetcher.testCorruptedIFile() to fail
[ https://issues.apache.org/jira/browse/MAPREDUCE-7302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7302: Status: Patch Available (was: Open)
[jira] [Updated] (MAPREDUCE-7302) Upgrading to JUnit 4.13 causes testcase TestFetcher.testCorruptedIFile() to fail
[ https://issues.apache.org/jira/browse/MAPREDUCE-7302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7302: Attachment: MAPREDUCE-7302-001.patch
[jira] [Updated] (MAPREDUCE-7302) Upgrading to JUnit 4.13 causes testcase TestFetcher.testCorruptedIFile() to fail
[ https://issues.apache.org/jira/browse/MAPREDUCE-7302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7302: Summary: Upgrading to JUnit 4.13 causes testcase TestFetcher.testCorruptedIFile() to fail (was: Upgrading to JUnit 4.13 causes tests in TestFetcher.testCorruptedIFile() to fail)
[jira] [Updated] (MAPREDUCE-7302) Upgrading to JUnit 4.13 causes tests in TestFetcher.testCorruptedIFile() to fail
[ https://issues.apache.org/jira/browse/MAPREDUCE-7302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7302: Summary: Upgrading to JUnit 4.13 causes tests in TestFetcher.testCorruptedIFile() to fail (was: Upgrading to JUnit 4.13 causes tests in TestFetcher.testCorruptedIFile() fail)
[jira] [Updated] (MAPREDUCE-7302) Upgrading to JUnit 4.13 causes tests in TestFetcher.testCorruptedIFile() fail
[ https://issues.apache.org/jira/browse/MAPREDUCE-7302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7302: Description: See related ticket YARN-10460. JUnit 4.13 causes the same failure: {noformat} [ERROR] Tests run: 16, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.851 s <<< FAILURE! - in org.apache.hadoop.mapreduce.task.reduce.TestFetcher [ERROR] testCorruptedIFile(org.apache.hadoop.mapreduce.task.reduce.TestFetcher) Time elapsed: 0.15 s <<< ERROR! java.lang.IllegalThreadStateException at java.lang.ThreadGroup.addUnstarted(ThreadGroup.java:867) at java.lang.Thread.init(Thread.java:405) at java.lang.Thread.init(Thread.java:349) at java.lang.Thread.(Thread.java:678) at java.util.concurrent.Executors$DefaultThreadFactory.newThread(Executors.java:613) at org.apache.hadoop.thirdparty.com.google.common.util.concurrent.ThreadFactoryBuilder$1.newThread(ThreadFactoryBuilder.java:163) at java.util.concurrent.ThreadPoolExecutor$Worker.(ThreadPoolExecutor.java:619) at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:932) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1367) at org.apache.hadoop.io.ReadaheadPool.submitReadahead(ReadaheadPool.java:159) at org.apache.hadoop.io.ReadaheadPool.readaheadStream(ReadaheadPool.java:141) at org.apache.hadoop.mapred.IFileInputStream.doReadahead(IFileInputStream.java:159) at org.apache.hadoop.mapred.IFileInputStream.(IFileInputStream.java:88) at org.apache.hadoop.mapreduce.task.reduce.TestFetcher.testCorruptedIFile(TestFetcher.java:587) {noformat} was:See related ticket YARN-10460. JUnit 4.13 causes the same test failure. 
[jira] [Created] (MAPREDUCE-7302) Upgrading to JUnit 4.13 causes tests in TestFetcher.testCorruptedIFile() fail
Peter Bacsko created MAPREDUCE-7302: --- Summary: Upgrading to JUnit 4.13 causes tests in TestFetcher.testCorruptedIFile() fail Key: MAPREDUCE-7302 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7302 Project: Hadoop Map/Reduce Issue Type: Bug Components: test Reporter: Peter Bacsko Assignee: Peter Bacsko See related ticket YARN-10460. JUnit 4.13 causes the same test failure.
[jira] [Commented] (MAPREDUCE-7273) JHS: make sure that Kerberos relogin is performed when KDC becomes offline then online again
[ https://issues.apache.org/jira/browse/MAPREDUCE-7273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17084733#comment-17084733 ] Peter Bacsko commented on MAPREDUCE-7273: - [~eyang] thanks, that makes perfect sense. I updated the patch. > JHS: make sure that Kerberos relogin is performed when KDC becomes offline > then online again > > > Key: MAPREDUCE-7273 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-7273 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: jobhistoryserver >Affects Versions: 2.10.0, 3.2.1, 3.1.3 >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Major > Attachments: MAPREDUCE-7273-001.patch, MAPREDUCE-7273-002.patch > > > In JHS, if the KDC goes offline, the IPC layer does try to relogin, but it's > not always enough. You have to wait for 60 seconds for the next retry. In the > meantime, if the KDC comes back, the following error might occur: > {noformat} > 2020-04-09 03:27:52,075 DEBUG ipc.Server (Server.java:processSaslToken(1952)) > - Have read input token of size 708 for processing by > saslServer.evaluateResponse() > 2020-04-09 03:27:52,077 DEBUG ipc.Server (Server.java:saslProcess(1829)) - > javax.security.sasl.SaslException: GSS initiate failed [Caused by > GSSException: Failure unspecified at GSS-API level (Mechanism level: Invalid > argument (400) - Cannot find key of appropriate type to decrypt AP REP - > AES128 CTS mode with HMAC SHA1-96)] > at > com.sun.security.sasl.gsskerb.GssKrb5Server.evaluateResponse(GssKrb5Server.java:199) > ... > {noformat} > When this happens, JHS has to be restarted.
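In Hadoop, the relogin this ticket asks for is performed through UserGroupInformation (e.g. {{reloginFromKeytab()}}); the sketch below shows only the generic retry-until-the-KDC-is-back shape of such a fix against a simulated KDC. {{ReloginRetry}}, its method names, and the attempt counts are illustrative assumptions, not the actual JHS patch.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ReloginRetry {
    interface Relogin { void attempt() throws Exception; }

    // Retry the relogin with a short, bounded wait between attempts instead
    // of giving up until the next fixed 60-second IPC retry window.
    static int reloginWithRetry(Relogin relogin, int maxAttempts, long sleepMillis)
            throws InterruptedException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                relogin.attempt();
                return attempt;            // relogin succeeded on this attempt
            } catch (Exception kdcDown) {
                Thread.sleep(sleepMillis); // KDC still offline; wait and retry
            }
        }
        return -1;                          // KDC never came back in time
    }

    public static void main(String[] args) throws Exception {
        AtomicInteger calls = new AtomicInteger();
        // Simulated KDC: offline for the first two attempts, then online.
        Relogin fakeKdc = () -> {
            if (calls.incrementAndGet() < 3) throw new Exception("KDC offline");
        };
        System.out.println(reloginWithRetry(fakeKdc, 5, 10));
    }
}
```

The point of the pattern is that a relogin attempted promptly after the KDC returns avoids the stale-ticket state in the SaslException above, where the only remedy was restarting JHS.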
[jira] [Updated] (MAPREDUCE-7273) JHS: make sure that Kerberos relogin is performed when KDC becomes offline then online again
[ https://issues.apache.org/jira/browse/MAPREDUCE-7273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7273: Attachment: MAPREDUCE-7273-002.patch
[jira] [Updated] (MAPREDUCE-7273) JHS: make sure that Kerberos relogin is performed when KDC becomes offline then online again
[ https://issues.apache.org/jira/browse/MAPREDUCE-7273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7273: Status: Patch Available (was: Open)
[jira] [Updated] (MAPREDUCE-7273) JHS: make sure that Kerberos relogin is performed when KDC becomes offline then online again
[ https://issues.apache.org/jira/browse/MAPREDUCE-7273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7273: Attachment: MAPREDUCE-7273-001.patch
[jira] [Updated] (MAPREDUCE-7273) JHS: make sure that Kerberos relogin is performed when KDC becomes offline then online again
[ https://issues.apache.org/jira/browse/MAPREDUCE-7273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7273: Affects Version/s: 2.10.0 3.2.1 3.1.3
[jira] [Created] (MAPREDUCE-7273) JHS: make sure that Kerberos relogin is performed when KDC becomes offline then online again
Peter Bacsko created MAPREDUCE-7273: --- Summary: JHS: make sure that Kerberos relogin is performed when KDC becomes offline then online again Key: MAPREDUCE-7273 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7273 Project: Hadoop Map/Reduce Issue Type: Bug Reporter: Peter Bacsko Assignee: Peter Bacsko
[jira] [Updated] (MAPREDUCE-7273) JHS: make sure that Kerberos relogin is performed when KDC becomes offline then online again
[ https://issues.apache.org/jira/browse/MAPREDUCE-7273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7273: Component/s: jobhistoryserver > JHS: make sure that Kerberos relogin is performed when KDC becomes offline > then online again > > > Key: MAPREDUCE-7273 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-7273 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: jobhistoryserver >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Major > > In JHS, if the KDC goes offline, the IPC layer does try to relogin, but it's > not always enough. You have to wait for 60 seconds for the next retry. In the > meantime, if the KDC comes back, the following error might occur: > {noformat} > 2020-04-09 03:27:52,075 DEBUG ipc.Server (Server.java:processSaslToken(1952)) > - Have read input token of size 708 for processing by > saslServer.evaluateResponse() > 2020-04-09 03:27:52,077 DEBUG ipc.Server (Server.java:saslProcess(1829)) - > javax.security.sasl.SaslException: GSS initiate failed [Caused by > GSSException: Failure unspecified at GSS-API level (Mechanism level: Invalid > argument (400) - Cannot find key of appropriate type to decrypt AP REP - > AES128 CTS mode with HMAC SHA1-96)] > at > com.sun.security.sasl.gsskerb.GssKrb5Server.evaluateResponse(GssKrb5Server.java:199) > ... > {noformat} > When this happens, JHS has to be restarted. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: mapreduce-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: mapreduce-issues-h...@hadoop.apache.org
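The failure mode described in MAPREDUCE-7273 above — waiting a fixed 60 seconds for the next IPC retry instead of re-logging in as soon as a stale-credential failure is seen — can be sketched generically. This is an illustrative sketch only: {{Login}}, {{callWithRelogin}}, and the use of {{SecurityException}} as a stand-in for a GSS/SASL failure are assumptions made for the example, not Hadoop or JDK Kerberos APIs.

```java
import java.util.concurrent.Callable;

// Illustrative only: retry an authenticated call, forcing a fresh login on
// each auth-style failure instead of waiting for a fixed retry window.
public class ReloginRetry {
    // Stand-in for a Kerberos relogin hook (hypothetical, not a Hadoop API).
    interface Login { void relogin(); }

    static <T> T callWithRelogin(Callable<T> call, Login login, int maxAttempts)
            throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (SecurityException e) {  // stale-ticket stand-in
                last = e;
                login.relogin();             // refresh credentials, then retry
            }
        }
        throw last;
    }
}
```

In the JHS scenario above, the equivalent of {{relogin()}} would run as soon as the KDC is reachable again, rather than leaving the server wedged until restart.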
[jira] [Commented] (MAPREDUCE-7250) FrameworkUploader: skip replication check entirely if timeout == 0
[ https://issues.apache.org/jira/browse/MAPREDUCE-7250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16987878#comment-16987878 ] Peter Bacsko commented on MAPREDUCE-7250: - [~prabhujoseph] could you pls review? > FrameworkUploader: skip replication check entirely if timeout == 0 > -- > > Key: MAPREDUCE-7250 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-7250 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: mrv2 >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Major > Attachments: MAPREDUCE-7250-001.patch > > > The framework uploader tool has this piece of code which makes sure that all > block of the uploaded mapreduce tarball has been replicated: > {noformat} > while(endTime - startTime < timeout * 1000 && >currentReplication < acceptableReplication) { > Thread.sleep(1000); > endTime = System.currentTimeMillis(); > currentReplication = getSmallestReplicatedBlockCount(); > } > {noformat} > There are cases, however, when we don't want to wait for this (eg. we want to > speed up Hadoop installation). > I suggest adding {{--skiprelicationcheck}} switch which disables this > replication test. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: mapreduce-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: mapreduce-issues-h...@hadoop.apache.org
[jira] [Commented] (MAPREDUCE-7250) FrameworkUploader: skip replication check entirely if timeout == 0
[ https://issues.apache.org/jira/browse/MAPREDUCE-7250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16987876#comment-16987876 ] Peter Bacsko commented on MAPREDUCE-7250: - Haven't written a test because it's not obvious how to test + change itself is trivial. > FrameworkUploader: skip replication check entirely if timeout == 0 > -- > > Key: MAPREDUCE-7250 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-7250 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: mrv2 >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Major > Attachments: MAPREDUCE-7250-001.patch > > > The framework uploader tool has this piece of code which makes sure that all > block of the uploaded mapreduce tarball has been replicated: > {noformat} > while(endTime - startTime < timeout * 1000 && >currentReplication < acceptableReplication) { > Thread.sleep(1000); > endTime = System.currentTimeMillis(); > currentReplication = getSmallestReplicatedBlockCount(); > } > {noformat} > There are cases, however, when we don't want to wait for this (eg. we want to > speed up Hadoop installation). > I suggest adding {{--skiprelicationcheck}} switch which disables this > replication test. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: mapreduce-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: mapreduce-issues-h...@hadoop.apache.org
[jira] [Updated] (MAPREDUCE-7250) FrameworkUploader: skip replication check entirely if timeout == 0
[ https://issues.apache.org/jira/browse/MAPREDUCE-7250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7250: Summary: FrameworkUploader: skip replication check entirely if timeout == 0 (was: FrameworkUploader: add option to skip replication check) > FrameworkUploader: skip replication check entirely if timeout == 0 > -- > > Key: MAPREDUCE-7250 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-7250 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: mrv2 >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Major > Attachments: MAPREDUCE-7250-001.patch > > > The framework uploader tool has this piece of code which makes sure that all > block of the uploaded mapreduce tarball has been replicated: > {noformat} > while(endTime - startTime < timeout * 1000 && >currentReplication < acceptableReplication) { > Thread.sleep(1000); > endTime = System.currentTimeMillis(); > currentReplication = getSmallestReplicatedBlockCount(); > } > {noformat} > There are cases, however, when we don't want to wait for this (eg. we want to > speed up Hadoop installation). > I suggest adding {{--skiprelicationcheck}} switch which disables this > replication test. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: mapreduce-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: mapreduce-issues-h...@hadoop.apache.org
[jira] [Updated] (MAPREDUCE-7250) FrameworkUploader: add option to skip replication check
[ https://issues.apache.org/jira/browse/MAPREDUCE-7250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7250: Status: Patch Available (was: Open) > FrameworkUploader: add option to skip replication check > --- > > Key: MAPREDUCE-7250 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-7250 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: mrv2 >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Major > Attachments: MAPREDUCE-7250-001.patch > > > The framework uploader tool has this piece of code which makes sure that all > block of the uploaded mapreduce tarball has been replicated: > {noformat} > while(endTime - startTime < timeout * 1000 && >currentReplication < acceptableReplication) { > Thread.sleep(1000); > endTime = System.currentTimeMillis(); > currentReplication = getSmallestReplicatedBlockCount(); > } > {noformat} > There are cases, however, when we don't want to wait for this (eg. we want to > speed up Hadoop installation). > I suggest adding {{--skiprelicationcheck}} switch which disables this > replication test. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: mapreduce-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: mapreduce-issues-h...@hadoop.apache.org
[jira] [Updated] (MAPREDUCE-7250) FrameworkUploader: add option to skip replication check
[ https://issues.apache.org/jira/browse/MAPREDUCE-7250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7250: Attachment: MAPREDUCE-7250-001.patch > FrameworkUploader: add option to skip replication check > --- > > Key: MAPREDUCE-7250 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-7250 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: mrv2 >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Major > Attachments: MAPREDUCE-7250-001.patch > > > The framework uploader tool has this piece of code which makes sure that all > block of the uploaded mapreduce tarball has been replicated: > {noformat} > while(endTime - startTime < timeout * 1000 && >currentReplication < acceptableReplication) { > Thread.sleep(1000); > endTime = System.currentTimeMillis(); > currentReplication = getSmallestReplicatedBlockCount(); > } > {noformat} > There are cases, however, when we don't want to wait for this (eg. we want to > speed up Hadoop installation). > I suggest adding {{--skiprelicationcheck}} switch which disables this > replication test. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: mapreduce-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: mapreduce-issues-h...@hadoop.apache.org
[jira] [Commented] (MAPREDUCE-7250) FrameworkUploader: add option to skip replication check
[ https://issues.apache.org/jira/browse/MAPREDUCE-7250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16987767#comment-16987767 ] Peter Bacsko commented on MAPREDUCE-7250: - After some discussion, it's probably easier to treat {{--timeout 0}} in a special way. If timeout == 0, we don't need a new switch and we don't print the following error message {noformat} if (endTime - startTime >= timeout * 1000) { LOG.error(String.format( "Timed out after %d seconds while waiting for acceptable" + " replication of %d (current replication is %d)", timeout, acceptableReplication, currentReplication)); } {noformat} > FrameworkUploader: add option to skip replication check > --- > > Key: MAPREDUCE-7250 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-7250 > Project: Hadoop Map/Reduce > Issue Type: Improvement > Components: mrv2 >Reporter: Peter Bacsko >Assignee: Peter Bacsko >Priority: Major > > The framework uploader tool has this piece of code which makes sure that all > block of the uploaded mapreduce tarball has been replicated: > {noformat} > while(endTime - startTime < timeout * 1000 && >currentReplication < acceptableReplication) { > Thread.sleep(1000); > endTime = System.currentTimeMillis(); > currentReplication = getSmallestReplicatedBlockCount(); > } > {noformat} > There are cases, however, when we don't want to wait for this (eg. we want to > speed up Hadoop installation). > I suggest adding {{--skiprelicationcheck}} switch which disables this > replication test. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: mapreduce-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: mapreduce-issues-h...@hadoop.apache.org
[jira] [Created] (MAPREDUCE-7250) FrameworkUploader: add option to skip replication check
Peter Bacsko created MAPREDUCE-7250: --- Summary: FrameworkUploader: add option to skip replication check Key: MAPREDUCE-7250 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7250 Project: Hadoop Map/Reduce Issue Type: Improvement Components: mrv2 Reporter: Peter Bacsko Assignee: Peter Bacsko The framework uploader tool has this piece of code which makes sure that all blocks of the uploaded mapreduce tarball have been replicated: {noformat} while(endTime - startTime < timeout * 1000 && currentReplication < acceptableReplication) { Thread.sleep(1000); endTime = System.currentTimeMillis(); currentReplication = getSmallestReplicatedBlockCount(); } {noformat} There are cases, however, when we don't want to wait for this (e.g. we want to speed up Hadoop installation). I suggest adding a {{--skiprelicationcheck}} switch which disables this replication test. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: mapreduce-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: mapreduce-issues-h...@hadoop.apache.org
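The special-casing settled on in the MAPREDUCE-7250 comments above (treat {{--timeout 0}} as "skip the replication wait entirely, and print no timeout error") might look roughly like this. {{waitForReplication}} and the {{IntSupplier}}-based polling are simplified stand-ins for the FrameworkUploader loop quoted above, not the actual patch.

```java
import java.util.function.IntSupplier;

// Illustrative sketch: a timeout of 0 short-circuits the replication wait,
// so no extra command-line switch is needed and no timeout error is logged.
public class ReplicationWait {
    /** Returns true if acceptable replication was reached, or the check was skipped. */
    static boolean waitForReplication(int timeoutSec, int acceptableReplication,
                                      IntSupplier currentReplication)
            throws InterruptedException {
        if (timeoutSec == 0) {
            return true;  // timeout == 0: skip the replication check entirely
        }
        long start = System.currentTimeMillis();
        long end = start;
        int current = currentReplication.getAsInt();
        // Same shape as the loop quoted in the issue description above.
        while (end - start < timeoutSec * 1000L && current < acceptableReplication) {
            Thread.sleep(1000);
            end = System.currentTimeMillis();
            current = currentReplication.getAsInt();
        }
        return current >= acceptableReplication;
    }
}
```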
[jira] [Commented] (MAPREDUCE-7249) Invalid event TA_TOO_MANY_FETCH_FAILURE at SUCCESS_CONTAINER_CLEANUP causes job failure
[ https://issues.apache.org/jira/browse/MAPREDUCE-7249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983547#comment-16983547 ] Peter Bacsko commented on MAPREDUCE-7249: - The checkstyle problem in {{TestTaskAttempt.java}} might be worth fixing. The other one is not. Otherwise +1 (non-binding) from me. > Invalid event TA_TOO_MANY_FETCH_FAILURE at SUCCESS_CONTAINER_CLEANUP causes > job failure > > > Key: MAPREDUCE-7249 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-7249 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: applicationmaster, mrv2 >Affects Versions: 3.1.0 >Reporter: Wilfred Spiegelenburg >Assignee: Wilfred Spiegelenburg >Priority: Critical > Attachments: MAPREDUCE-7249-001.patch > > > Same issue as in MAPREDUCE-7240 but this one has a different state in which > the {{TA_TOO_MANY_FETCH_FAILURE}} event is received: > {code} > 2019-11-18 23:03:40,270 ERROR [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Can't handle > this event at current state for attempt_1568654141590_630203_m_003108_1 > org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: > TA_TOO_MANY_FETCH_FAILURE at SUCCESS_CONTAINER_CLEANUP > at > org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305) > at > org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46) > at > org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448) > at > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.handle(TaskAttemptImpl.java:1183) > at > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.handle(TaskAttemptImpl.java:148) > at > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher.handle(MRAppMaster.java:1388) > at > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher.handle(MRAppMaster.java:1380) > at > 
org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:182) > at > org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:109) > {code} > The stack trace is from a CDH release which is highly patched 2.6 release. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: mapreduce-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: mapreduce-issues-h...@hadoop.apache.org
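The {{Invalid event ... at ...}} errors above are thrown whenever a (state, event) pair has no registered transition in the task attempt's state machine, and the usual fix is to register the missing transition. A toy table-driven state machine, with names chosen to mirror the ones in the log, shows the shape of the problem and of the fix; it is a self-contained illustration, not Hadoop's {{StateMachineFactory}}.

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration: an unregistered (state, event) pair throws, exactly the
// way the task attempt state machine rejects TA_TOO_MANY_FETCH_FAILURE at
// SUCCESS_CONTAINER_CLEANUP; adding the missing transition resolves it.
public class ToyStateMachine {
    enum State { SUCCESS_CONTAINER_CLEANUP, SUCCEEDED, FAILED }
    enum Event { TA_CONTAINER_CLEANED, TA_TOO_MANY_FETCH_FAILURE }

    private final Map<State, Map<Event, State>> table = new HashMap<>();
    private State state;

    ToyStateMachine(State initial) { state = initial; }

    void addTransition(State from, Event on, State to) {
        table.computeIfAbsent(from, k -> new HashMap<>()).put(on, to);
    }

    State handle(Event e) {
        State next = table.getOrDefault(state, Map.of()).get(e);
        if (next == null) {
            // Mirrors InvalidStateTransitonException in the logs above.
            throw new IllegalStateException("Invalid event: " + e + " at " + state);
        }
        return state = next;
    }
}
```

The attached patches take the same approach at the real state machine: they declare what a late fetch-failure event means in the cleanup/finishing states instead of letting the dispatcher fail the job.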
[jira] [Updated] (MAPREDUCE-7240) Exception ' Invalid event: TA_TOO_MANY_FETCH_FAILURE at SUCCESS_FINISHING_CONTAINER' cause job error
[ https://issues.apache.org/jira/browse/MAPREDUCE-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7240: Hadoop Flags: Reviewed Resolution: Fixed Status: Resolved (was: Patch Available) > Exception ' Invalid event: TA_TOO_MANY_FETCH_FAILURE at > SUCCESS_FINISHING_CONTAINER' cause job error > > > Key: MAPREDUCE-7240 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-7240 > Project: Hadoop Map/Reduce > Issue Type: Bug >Affects Versions: 2.8.2 >Reporter: luhuachao >Assignee: luhuachao >Priority: Critical > Labels: Reviewed, applicationmaster, mrv2 > Fix For: 3.3.0, 3.1.4, 3.2.2 > > Attachments: MAPREDUCE-7240-001.patch, MAPREDUCE-7240-002.patch, > MAPREDUCE-7240-branch-3.1.001.patch, MAPREDUCE-7240-branch-3.2.001.patch, > MAPREDUCE-7240-branch-3.2.001.patch, application_1566552310686_260041.log > > > *log in appmaster* > {noformat} > 2019-09-03 17:18:43,090 INFO [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Too many fetch-failures > for output of task attempt: attempt_1566552310686_260041_m_52_0 ... > raising fetch failure to map > 2019-09-03 17:18:43,091 INFO [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Too many fetch-failures > for output of task attempt: attempt_1566552310686_260041_m_49_0 ... > raising fetch failure to map > 2019-09-03 17:18:43,091 INFO [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Too many fetch-failures > for output of task attempt: attempt_1566552310686_260041_m_51_0 ... > raising fetch failure to map > 2019-09-03 17:18:43,091 INFO [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Too many fetch-failures > for output of task attempt: attempt_1566552310686_260041_m_50_0 ... 
> raising fetch failure to map > 2019-09-03 17:18:43,091 INFO [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Too many fetch-failures > for output of task attempt: attempt_1566552310686_260041_m_53_0 ... > raising fetch failure to map > 2019-09-03 17:18:43,092 INFO [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: > attempt_1566552310686_260041_m_52_0 transitioned from state SUCCEEDED to > FAILED, event type is TA_TOO_MANY_FETCH_FAILURE and nodeId=yarn095:45454 > 2019-09-03 17:18:43,092 ERROR [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Can't handle > this event at current state for attempt_1566552310686_260041_m_49_0 > org.apache.hadoop.yarn.state.InvalidStateTransitionException: Invalid event: > TA_TOO_MANY_FETCH_FAILURE at SUCCESS_FINISHING_CONTAINER > at > org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305) > at > org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46) > at > org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448) > at > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.handle(TaskAttemptImpl.java:1206) > at > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.handle(TaskAttemptImpl.java:146) > at > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher.handle(MRAppMaster.java:1458) > at > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher.handle(MRAppMaster.java:1450) > at > org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:184) > at > org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:110) > at java.lang.Thread.run(Thread.java:745) > 2019-09-03 17:18:43,093 ERROR [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Can't handle > this 
event at current state for attempt_1566552310686_260041_m_51_0 > org.apache.hadoop.yarn.state.InvalidStateTransitionException: Invalid event: > TA_TOO_MANY_FETCH_FAILURE at SUCCESS_FINISHING_CONTAINER > at > org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305) > at > org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46) > at > org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448) > at > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.handle(TaskAttemptImpl.java:1206) > at >
[jira] [Updated] (MAPREDUCE-7240) Exception ' Invalid event: TA_TOO_MANY_FETCH_FAILURE at SUCCESS_FINISHING_CONTAINER' cause job error
[ https://issues.apache.org/jira/browse/MAPREDUCE-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7240: Fix Version/s: 3.2.2 3.1.4 > Exception ' Invalid event: TA_TOO_MANY_FETCH_FAILURE at > SUCCESS_FINISHING_CONTAINER' cause job error > > > Key: MAPREDUCE-7240 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-7240 > Project: Hadoop Map/Reduce > Issue Type: Bug >Affects Versions: 2.8.2 >Reporter: luhuachao >Assignee: luhuachao >Priority: Critical > Labels: Reviewed, applicationmaster, mrv2 > Fix For: 3.3.0, 3.1.4, 3.2.2 > > Attachments: MAPREDUCE-7240-001.patch, MAPREDUCE-7240-002.patch, > MAPREDUCE-7240-branch-3.1.001.patch, MAPREDUCE-7240-branch-3.2.001.patch, > MAPREDUCE-7240-branch-3.2.001.patch, application_1566552310686_260041.log > > > *log in appmaster* > {noformat} > 2019-09-03 17:18:43,090 INFO [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Too many fetch-failures > for output of task attempt: attempt_1566552310686_260041_m_52_0 ... > raising fetch failure to map > 2019-09-03 17:18:43,091 INFO [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Too many fetch-failures > for output of task attempt: attempt_1566552310686_260041_m_49_0 ... > raising fetch failure to map > 2019-09-03 17:18:43,091 INFO [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Too many fetch-failures > for output of task attempt: attempt_1566552310686_260041_m_51_0 ... > raising fetch failure to map > 2019-09-03 17:18:43,091 INFO [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Too many fetch-failures > for output of task attempt: attempt_1566552310686_260041_m_50_0 ... 
> raising fetch failure to map > 2019-09-03 17:18:43,091 INFO [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Too many fetch-failures > for output of task attempt: attempt_1566552310686_260041_m_53_0 ... > raising fetch failure to map > 2019-09-03 17:18:43,092 INFO [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: > attempt_1566552310686_260041_m_52_0 transitioned from state SUCCEEDED to > FAILED, event type is TA_TOO_MANY_FETCH_FAILURE and nodeId=yarn095:45454 > 2019-09-03 17:18:43,092 ERROR [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Can't handle > this event at current state for attempt_1566552310686_260041_m_49_0 > org.apache.hadoop.yarn.state.InvalidStateTransitionException: Invalid event: > TA_TOO_MANY_FETCH_FAILURE at SUCCESS_FINISHING_CONTAINER > at > org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305) > at > org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46) > at > org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448) > at > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.handle(TaskAttemptImpl.java:1206) > at > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.handle(TaskAttemptImpl.java:146) > at > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher.handle(MRAppMaster.java:1458) > at > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher.handle(MRAppMaster.java:1450) > at > org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:184) > at > org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:110) > at java.lang.Thread.run(Thread.java:745) > 2019-09-03 17:18:43,093 ERROR [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Can't handle > this 
event at current state for attempt_1566552310686_260041_m_51_0 > org.apache.hadoop.yarn.state.InvalidStateTransitionException: Invalid event: > TA_TOO_MANY_FETCH_FAILURE at SUCCESS_FINISHING_CONTAINER > at > org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305) > at > org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46) > at > org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448) > at > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.handle(TaskAttemptImpl.java:1206) > at > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.handle(TaskAttemptImpl.java:146) > at >
[jira] [Commented] (MAPREDUCE-7240) Exception ' Invalid event: TA_TOO_MANY_FETCH_FAILURE at SUCCESS_FINISHING_CONTAINER' cause job error
[ https://issues.apache.org/jira/browse/MAPREDUCE-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983488#comment-16983488 ] Peter Bacsko commented on MAPREDUCE-7240: - Ok, the patch has been committed in the meantime to branch-3.2 > Exception ' Invalid event: TA_TOO_MANY_FETCH_FAILURE at > SUCCESS_FINISHING_CONTAINER' cause job error > > > Key: MAPREDUCE-7240 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-7240 > Project: Hadoop Map/Reduce > Issue Type: Bug >Affects Versions: 2.8.2 >Reporter: luhuachao >Assignee: luhuachao >Priority: Critical > Labels: Reviewed, applicationmaster, mrv2 > Fix For: 3.3.0 > > Attachments: MAPREDUCE-7240-001.patch, MAPREDUCE-7240-002.patch, > MAPREDUCE-7240-branch-3.1.001.patch, MAPREDUCE-7240-branch-3.2.001.patch, > MAPREDUCE-7240-branch-3.2.001.patch, application_1566552310686_260041.log > > > *log in appmaster* > {noformat} > 2019-09-03 17:18:43,090 INFO [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Too many fetch-failures > for output of task attempt: attempt_1566552310686_260041_m_52_0 ... > raising fetch failure to map > 2019-09-03 17:18:43,091 INFO [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Too many fetch-failures > for output of task attempt: attempt_1566552310686_260041_m_49_0 ... > raising fetch failure to map > 2019-09-03 17:18:43,091 INFO [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Too many fetch-failures > for output of task attempt: attempt_1566552310686_260041_m_51_0 ... > raising fetch failure to map > 2019-09-03 17:18:43,091 INFO [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Too many fetch-failures > for output of task attempt: attempt_1566552310686_260041_m_50_0 ... 
> raising fetch failure to map > 2019-09-03 17:18:43,091 INFO [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Too many fetch-failures > for output of task attempt: attempt_1566552310686_260041_m_53_0 ... > raising fetch failure to map > 2019-09-03 17:18:43,092 INFO [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: > attempt_1566552310686_260041_m_52_0 transitioned from state SUCCEEDED to > FAILED, event type is TA_TOO_MANY_FETCH_FAILURE and nodeId=yarn095:45454 > 2019-09-03 17:18:43,092 ERROR [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Can't handle > this event at current state for attempt_1566552310686_260041_m_49_0 > org.apache.hadoop.yarn.state.InvalidStateTransitionException: Invalid event: > TA_TOO_MANY_FETCH_FAILURE at SUCCESS_FINISHING_CONTAINER > at > org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305) > at > org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46) > at > org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448) > at > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.handle(TaskAttemptImpl.java:1206) > at > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.handle(TaskAttemptImpl.java:146) > at > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher.handle(MRAppMaster.java:1458) > at > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher.handle(MRAppMaster.java:1450) > at > org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:184) > at > org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:110) > at java.lang.Thread.run(Thread.java:745) > 2019-09-03 17:18:43,093 ERROR [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Can't handle > this 
event at current state for attempt_1566552310686_260041_m_51_0 > org.apache.hadoop.yarn.state.InvalidStateTransitionException: Invalid event: > TA_TOO_MANY_FETCH_FAILURE at SUCCESS_FINISHING_CONTAINER > at > org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305) > at > org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46) > at > org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448) > at > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.handle(TaskAttemptImpl.java:1206) > at >
[jira] [Updated] (MAPREDUCE-7240) Exception ' Invalid event: TA_TOO_MANY_FETCH_FAILURE at SUCCESS_FINISHING_CONTAINER' cause job error
[ https://issues.apache.org/jira/browse/MAPREDUCE-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7240: Attachment: MAPREDUCE-7240-branch-3.2.001.patch > Exception ' Invalid event: TA_TOO_MANY_FETCH_FAILURE at > SUCCESS_FINISHING_CONTAINER' cause job error > > > Key: MAPREDUCE-7240 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-7240 > Project: Hadoop Map/Reduce > Issue Type: Bug >Affects Versions: 2.8.2 >Reporter: luhuachao >Assignee: luhuachao >Priority: Critical > Labels: Reviewed, applicationmaster, mrv2 > Fix For: 3.3.0 > > Attachments: MAPREDUCE-7240-001.patch, MAPREDUCE-7240-002.patch, > MAPREDUCE-7240-branch-3.1.001.patch, MAPREDUCE-7240-branch-3.2.001.patch, > MAPREDUCE-7240-branch-3.2.001.patch, application_1566552310686_260041.log > > > *log in appmaster* > {noformat} > 2019-09-03 17:18:43,090 INFO [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Too many fetch-failures > for output of task attempt: attempt_1566552310686_260041_m_52_0 ... > raising fetch failure to map > 2019-09-03 17:18:43,091 INFO [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Too many fetch-failures > for output of task attempt: attempt_1566552310686_260041_m_49_0 ... > raising fetch failure to map > 2019-09-03 17:18:43,091 INFO [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Too many fetch-failures > for output of task attempt: attempt_1566552310686_260041_m_51_0 ... > raising fetch failure to map > 2019-09-03 17:18:43,091 INFO [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Too many fetch-failures > for output of task attempt: attempt_1566552310686_260041_m_50_0 ... 
> raising fetch failure to map > 2019-09-03 17:18:43,091 INFO [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Too many fetch-failures > for output of task attempt: attempt_1566552310686_260041_m_53_0 ... > raising fetch failure to map > 2019-09-03 17:18:43,092 INFO [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: > attempt_1566552310686_260041_m_52_0 transitioned from state SUCCEEDED to > FAILED, event type is TA_TOO_MANY_FETCH_FAILURE and nodeId=yarn095:45454 > 2019-09-03 17:18:43,092 ERROR [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Can't handle > this event at current state for attempt_1566552310686_260041_m_49_0 > org.apache.hadoop.yarn.state.InvalidStateTransitionException: Invalid event: > TA_TOO_MANY_FETCH_FAILURE at SUCCESS_FINISHING_CONTAINER > at > org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305) > at > org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46) > at > org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448) > at > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.handle(TaskAttemptImpl.java:1206) > at > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.handle(TaskAttemptImpl.java:146) > at > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher.handle(MRAppMaster.java:1458) > at > org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher.handle(MRAppMaster.java:1450) > at > org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:184) > at > org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:110) > at java.lang.Thread.run(Thread.java:745) > 2019-09-03 17:18:43,093 ERROR [AsyncDispatcher event handler] > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Can't handle > this 
event at current state for attempt_1566552310686_260041_m_51_0 > org.apache.hadoop.yarn.state.InvalidStateTransitionException: Invalid event: > TA_TOO_MANY_FETCH_FAILURE at SUCCESS_FINISHING_CONTAINER > at > org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305) > at > org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46) > at > org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448) > at > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.handle(TaskAttemptImpl.java:1206) > at > org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl.handle(TaskAttemptImpl.java:146) > at >
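The stack traces above show YARN's StateMachineFactory rejecting an event for which the current state has no registered transition. The sketch below is illustrative only: a hand-rolled transition table rather than Hadoop's actual StateMachineFactory, with states and events named after the log. The missing (SUCCESS_FINISHING_CONTAINER, TA_TOO_MANY_FETCH_FAILURE) entry stands in for the gap the attached patches address by registering a transition for that pair.

```java
import java.util.EnumMap;
import java.util.Map;

public class StateMachineSketch {
    // Simplified task-attempt states and events, named after the log above.
    enum State { SUCCEEDED, SUCCESS_FINISHING_CONTAINER, FAILED }
    enum Event { TA_TOO_MANY_FETCH_FAILURE, TA_CONTAINER_CLEANED }

    // Transition table: (state, event) -> next state. A missing entry plays
    // the role of Hadoop's InvalidStateTransitionException.
    static final Map<State, Map<Event, State>> TRANSITIONS =
        new EnumMap<>(State.class);
    static {
        Map<Event, State> fromSucceeded = new EnumMap<>(Event.class);
        fromSucceeded.put(Event.TA_TOO_MANY_FETCH_FAILURE, State.FAILED);
        TRANSITIONS.put(State.SUCCEEDED, fromSucceeded);

        // SUCCESS_FINISHING_CONTAINER deliberately has no entry for
        // TA_TOO_MANY_FETCH_FAILURE: the unregistered transition.
        Map<Event, State> fromFinishing = new EnumMap<>(Event.class);
        fromFinishing.put(Event.TA_CONTAINER_CLEANED, State.SUCCEEDED);
        TRANSITIONS.put(State.SUCCESS_FINISHING_CONTAINER, fromFinishing);
    }

    static State handle(State current, Event event) {
        State next = TRANSITIONS.getOrDefault(current, Map.of()).get(event);
        if (next == null) {
            // Mirrors the "Invalid event: X at Y" message in the AM log.
            throw new IllegalStateException(
                "Invalid event: " + event + " at " + current);
        }
        return next;
    }

    public static void main(String[] args) {
        // A registered transition succeeds.
        System.out.println(
            handle(State.SUCCEEDED, Event.TA_TOO_MANY_FETCH_FAILURE));
        // The unregistered pair reproduces the error shape from the log.
        try {
            handle(State.SUCCESS_FINISHING_CONTAINER,
                   Event.TA_TOO_MANY_FETCH_FAILURE);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The fix in this issue follows the same idea: add the missing transition so that a late TA_TOO_MANY_FETCH_FAILURE arriving while the container is still finishing is handled instead of crashing the dispatcher.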
[jira] [Updated] (MAPREDUCE-7240) Exception ' Invalid event: TA_TOO_MANY_FETCH_FAILURE at SUCCESS_FINISHING_CONTAINER' cause job error
[ https://issues.apache.org/jira/browse/MAPREDUCE-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7240: Attachment: MAPREDUCE-7240-branch-3.1.001.patch
[jira] [Updated] (MAPREDUCE-7240) Exception ' Invalid event: TA_TOO_MANY_FETCH_FAILURE at SUCCESS_FINISHING_CONTAINER' cause job error
[ https://issues.apache.org/jira/browse/MAPREDUCE-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7240: Status: Patch Available (was: Reopened)
[jira] [Updated] (MAPREDUCE-7240) Exception ' Invalid event: TA_TOO_MANY_FETCH_FAILURE at SUCCESS_FINISHING_CONTAINER' cause job error
[ https://issues.apache.org/jira/browse/MAPREDUCE-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7240: Attachment: MAPREDUCE-7240-branch-3.2.001.patch
[jira] [Reopened] (MAPREDUCE-7240) Exception ' Invalid event: TA_TOO_MANY_FETCH_FAILURE at SUCCESS_FINISHING_CONTAINER' cause job error
[ https://issues.apache.org/jira/browse/MAPREDUCE-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko reopened MAPREDUCE-7240: - Reopening it to attach patches for branch-3.2 and branch-3.1.
[jira] [Commented] (MAPREDUCE-7240) Exception ' Invalid event: TA_TOO_MANY_FETCH_FAILURE at SUCCESS_FINISHING_CONTAINER' cause job error
[ https://issues.apache.org/jira/browse/MAPREDUCE-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16983222#comment-16983222 ] Peter Bacsko commented on MAPREDUCE-7240: - Thanks [~prabhujoseph] - I missed that. V2 should be good.
[jira] [Updated] (MAPREDUCE-7240) Exception ' Invalid event: TA_TOO_MANY_FETCH_FAILURE at SUCCESS_FINISHING_CONTAINER' cause job error
[ https://issues.apache.org/jira/browse/MAPREDUCE-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7240: Labels: applicationmaster mrv2 (was: kerberos)
[jira] [Commented] (MAPREDUCE-7240) Exception ' Invalid event: TA_TOO_MANY_FETCH_FAILURE at SUCCESS_FINISHING_CONTAINER' cause job error
[ https://issues.apache.org/jira/browse/MAPREDUCE-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16982771#comment-16982771 ] Peter Bacsko commented on MAPREDUCE-7240: - Checkstyle can be ignored - the rest of the code uses the same indentation level. [~wilfreds], [~aajisaka] could you review the patch pls?
[jira] [Commented] (MAPREDUCE-7240) Exception ' Invalid event: TA_TOO_MANY_FETCH_FAILURE at SUCCESS_FINISHING_CONTAINER' cause job error
[ https://issues.apache.org/jira/browse/MAPREDUCE-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16982604#comment-16982604 ] Peter Bacsko commented on MAPREDUCE-7240: - I took the liberty of rebasing the patch to trunk. Diff is based on the last 13 commits from https://github.com/chimney-lee/hadoop/tree/branch-2.8.
[jira] [Updated] (MAPREDUCE-7240) Exception ' Invalid event: TA_TOO_MANY_FETCH_FAILURE at SUCCESS_FINISHING_CONTAINER' cause job error
[ https://issues.apache.org/jira/browse/MAPREDUCE-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7240: Attachment: MAPREDUCE-7240-001.patch
[jira] [Commented] (MAPREDUCE-7240) Exception ' Invalid event: TA_TOO_MANY_FETCH_FAILURE at SUCCESS_FINISHING_CONTAINER' cause job error
[ https://issues.apache.org/jira/browse/MAPREDUCE-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16982321#comment-16982321 ] Peter Bacsko commented on MAPREDUCE-7240: - To me this looks like a legitimate solution. If a reducer cannot fetch the intermediate data from a mapper, then let's kill the current attempt and schedule a new one. We do the same thing if the attempt is already in the SUCCEEDED state, so this is the most reasonable approach.
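The pre-patch failure mode (a TA_TOO_MANY_FETCH_FAILURE event arriving while the attempt is in SUCCESS_FINISHING_CONTAINER, a state with no registered transition for it) can be sketched in plain Java. This is illustrative only, not Hadoop's actual StateMachineFactory; the state and event names mirror the log above, everything else is assumed:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a table-driven state machine that fails hard when an
// event has no registered transition for the current state. Hadoop throws
// InvalidStateTransitionException at the equivalent point.
public class AttemptStateMachineSketch {
    enum State { SUCCEEDED, SUCCESS_FINISHING_CONTAINER, FAILED }
    enum Event { TA_TOO_MANY_FETCH_FAILURE }

    private final Map<State, Map<Event, State>> transitions = new HashMap<>();
    private State current;

    AttemptStateMachineSketch(State initial) {
        current = initial;
        // Before the patch, only SUCCEEDED handled TA_TOO_MANY_FETCH_FAILURE;
        // the fix registers the same transition for SUCCESS_FINISHING_CONTAINER.
        addTransition(State.SUCCEEDED, Event.TA_TOO_MANY_FETCH_FAILURE, State.FAILED);
    }

    void addTransition(State from, Event on, State to) {
        transitions.computeIfAbsent(from, k -> new HashMap<>()).put(on, to);
    }

    State handle(Event event) {
        Map<Event, State> table = transitions.get(current);
        if (table == null || !table.containsKey(event)) {
            throw new IllegalStateException(
                "Invalid event: " + event + " at " + current);
        }
        current = table.get(event);
        return current;
    }

    public static void main(String[] args) {
        AttemptStateMachineSketch sm =
            new AttemptStateMachineSketch(State.SUCCESS_FINISHING_CONTAINER);
        try {
            sm.handle(Event.TA_TOO_MANY_FETCH_FAILURE);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The patch amounts to adding the missing transition, which matches the comment above: treat a fetch failure in SUCCESS_FINISHING_CONTAINER the same way as one in SUCCEEDED.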
[jira] [Reopened] (MAPREDUCE-6441) Improve temporary directory name generation in LocalDistributedCacheManager for concurrent processes
[ https://issues.apache.org/jira/browse/MAPREDUCE-6441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko reopened MAPREDUCE-6441: - Reopening this to attach patch for branch-3.1 too. > Improve temporary directory name generation in LocalDistributedCacheManager > for concurrent processes > > > Key: MAPREDUCE-6441 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-6441 > Project: Hadoop Map/Reduce > Issue Type: Bug >Reporter: William Watson >Assignee: Haibo Chen >Priority: Major > Fix For: 3.2.0 > > Attachments: HADOOP-10924.02.patch, > HADOOP-10924.03.jobid-plus-uuid.patch, MAPREDUCE-6441-branch-3.1.001.patch, > MAPREDUCE-6441.004.patch, MAPREDUCE-6441.005.patch, MAPREDUCE-6441.006.patch, > MAPREDUCE-6441.008.patch, MAPREDUCE-6441.009.patch, MAPREDUCE-6441.010.patch, > MAPREDUCE-6441.011.patch > > > Kicking off many sqoop processes in different threads results in: > {code} > 2014-08-01 13:47:24 -0400: INFO - 14/08/01 13:47:22 ERROR tool.ImportTool: > Encountered IOException running import job: java.io.IOException: > java.util.concurrent.ExecutionException: java.io.IOException: Rename cannot > overwrite non empty destination directory > /tmp/hadoop-hadoop/mapred/local/1406915233073 > 2014-08-01 13:47:24 -0400: INFO -at > org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:149) > 2014-08-01 13:47:24 -0400: INFO -at > org.apache.hadoop.mapred.LocalJobRunner$Job.(LocalJobRunner.java:163) > 2014-08-01 13:47:24 -0400: INFO -at > org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:731) > 2014-08-01 13:47:24 -0400: INFO -at > org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:432) > 2014-08-01 13:47:24 -0400: INFO -at > org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285) > 2014-08-01 13:47:24 -0400: INFO -at > org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282) > 2014-08-01 13:47:24 -0400: INFO -at > java.security.AccessController.doPrivileged(Native 
Method) > 2014-08-01 13:47:24 -0400: INFO -at > javax.security.auth.Subject.doAs(Subject.java:415) > 2014-08-01 13:47:24 -0400: INFO -at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548) > 2014-08-01 13:47:24 -0400: INFO -at > org.apache.hadoop.mapreduce.Job.submit(Job.java:1282) > 2014-08-01 13:47:24 -0400: INFO -at > org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303) > 2014-08-01 13:47:24 -0400: INFO -at > org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:186) > 2014-08-01 13:47:24 -0400: INFO -at > org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:159) > 2014-08-01 13:47:24 -0400: INFO -at > org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:239) > 2014-08-01 13:47:24 -0400: INFO -at > org.apache.sqoop.manager.SqlManager.importQuery(SqlManager.java:645) > 2014-08-01 13:47:24 -0400: INFO -at > org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:415) > 2014-08-01 13:47:24 -0400: INFO -at > org.apache.sqoop.tool.ImportTool.run(ImportTool.java:502) > 2014-08-01 13:47:24 -0400: INFO -at > org.apache.sqoop.Sqoop.run(Sqoop.java:145) > 2014-08-01 13:47:24 -0400: INFO -at > org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > 2014-08-01 13:47:24 -0400: INFO -at > org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181) > 2014-08-01 13:47:24 -0400: INFO -at > org.apache.sqoop.Sqoop.runTool(Sqoop.java:220) > 2014-08-01 13:47:24 -0400: INFO -at > org.apache.sqoop.Sqoop.runTool(Sqoop.java:229) > 2014-08-01 13:47:24 -0400: INFO -at > org.apache.sqoop.Sqoop.main(Sqoop.java:238) > {code} > If two are kicked off in the same second. The issue is the following lines of > code in the org.apache.hadoop.mapred.LocalDistributedCacheManager class: > {code} > // Generating unique numbers for FSDownload. 
> AtomicLong uniqueNumberGenerator = >new AtomicLong(System.currentTimeMillis()); > {code} > and > {code} > Long.toString(uniqueNumberGenerator.incrementAndGet())), > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: mapreduce-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: mapreduce-issues-h...@hadoop.apache.org
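The collision described above is easy to reproduce in isolation: two processes that seed the generator in the same millisecond produce identical "unique" names, while a jobId-plus-UUID scheme (the approach in the later patches) does not. The method and variable names below are illustrative, not Hadoop's actual code:

```java
import java.util.UUID;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of why seeding a per-process AtomicLong with the wall clock is not
// unique across processes, and why a jobId + UUID name is.
public class UniqueNameSketch {
    // First name generated by a process that started at startMillis.
    static String timestampName(long startMillis) {
        AtomicLong gen = new AtomicLong(startMillis); // per-process counter
        return Long.toString(gen.incrementAndGet());
    }

    // Collision-free across processes regardless of start time.
    static String uuidName(String jobId) {
        return jobId + "_" + UUID.randomUUID();
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        // Two "processes" starting in the same millisecond collide:
        System.out.println(timestampName(now).equals(timestampName(now)));
        // UUID-based names do not:
        System.out.println(uuidName("job_1").equals(uuidName("job_1")));
    }
}
```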
[jira] [Updated] (MAPREDUCE-6441) Improve temporary directory name generation in LocalDistributedCacheManager for concurrent processes
[ https://issues.apache.org/jira/browse/MAPREDUCE-6441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-6441: Status: Patch Available (was: Reopened)
[jira] [Updated] (MAPREDUCE-6441) Improve temporary directory name generation in LocalDistributedCacheManager for concurrent processes
[ https://issues.apache.org/jira/browse/MAPREDUCE-6441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-6441: Attachment: MAPREDUCE-6441-branch-3.1.001.patch
[jira] [Commented] (MAPREDUCE-7225) Fix broken current folder expansion during MR job start
[ https://issues.apache.org/jira/browse/MAPREDUCE-7225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898063#comment-16898063 ] Peter Bacsko commented on MAPREDUCE-7225: - [~snemeth] patches are good to go into branch-3.1 and branch-3.2. > Fix broken current folder expansion during MR job start > --- > > Key: MAPREDUCE-7225 > URL: https://issues.apache.org/jira/browse/MAPREDUCE-7225 > Project: Hadoop Map/Reduce > Issue Type: Bug > Components: mrv2 >Affects Versions: 2.9.0, 3.0.3 >Reporter: Adam Antal >Assignee: Peter Bacsko >Priority: Major > Attachments: MAPREDUCE-7225-001.patch, MAPREDUCE-7225-002.patch, > MAPREDUCE-7225-002.patch, MAPREDUCE-7225-003.patch, > MAPREDUCE-7225.branch-3.1.001.patch, MAPREDUCE-7225.branch-3.2.001.patch > > > Starting a sleep job giving "." as files that should be localized is working > fine up until 2.9.0, but after that the user is given an > IllegalArgumentException. This change is a side-effect of HADOOP-12747 where > {{GenericOptionsParser#validateFiles}} function got modified. > Can be reproduced by starting a sleep job with "-files ." given as extra > parameter. Log: > {noformat} > sudo -u hdfs hadoop jar hadoop-mapreduce-client-jobclient-3.0.0.jar sleep > -files . -m 1 -r 1 -rt 2000 -mt 2000 > WARNING: Use "yarn jar" to launch YARN applications. 
> 19/07/17 08:13:26 INFO client.ConfiguredRMFailoverProxyProvider: Failing over > to rm21 > 19/07/17 08:13:26 INFO mapreduce.JobResourceUploader: Disabling Erasure > Coding for path: /user/hdfs/.staging/job_1563349475208_0017 > 19/07/17 08:13:26 INFO mapreduce.JobSubmitter: Cleaning up the staging area > /user/hdfs/.staging/job_1563349475208_0017 > java.lang.IllegalArgumentException: Can not create a Path from an empty string > at org.apache.hadoop.fs.Path.checkPathArg(Path.java:168) > at org.apache.hadoop.fs.Path.(Path.java:180) > at org.apache.hadoop.fs.Path.(Path.java:125) > at > org.apache.hadoop.mapreduce.JobResourceUploader.copyRemoteFiles(JobResourceUploader.java:686) > at > org.apache.hadoop.mapreduce.JobResourceUploader.uploadFiles(JobResourceUploader.java:262) > at > org.apache.hadoop.mapreduce.JobResourceUploader.uploadResourcesInternal(JobResourceUploader.java:203) > at > org.apache.hadoop.mapreduce.JobResourceUploader.uploadResources(JobResourceUploader.java:131) > at > org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:99) > at > org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:194) > at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570) > at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1726) > at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567) > at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1588) > at org.apache.hadoop.mapreduce.SleepJob.run(SleepJob.java:273) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) > at org.apache.hadoop.mapreduce.SleepJob.main(SleepJob.java:194) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71) > at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144) > at > org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:139) > at > org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:147) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at org.apache.hadoop.util.RunJar.run(RunJar.java:313) > at org.apache.hadoop.util.RunJar.main(RunJar.java:227) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: mapreduce-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: mapreduce-issues-h...@hadoop.apache.org
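A minimal sketch of the failure mode and the idea behind the fix, assuming the root cause is that "." collapses to an empty relative path before Hadoop's Path constructor is called: the broken helper below is hypothetical, and the java.nio-based expansion only illustrates the concept, not the actual GenericOptionsParser#validateFiles code.

```java
import java.nio.file.Paths;

// Sketch: a naive normalization loses "." entirely, yielding the empty string
// that later makes new Path("") throw IllegalArgumentException; expanding to
// an absolute path first avoids the problem. Helper names are illustrative.
public class CurrentFolderExpansionSketch {
    // Hypothetical pre-fix behavior: "." collapses to "".
    static String brokenNormalize(String file) {
        return file.equals(".") ? "" : file;
    }

    // Expand to an absolute, normalized path before building a Path object.
    static String fixedNormalize(String file) {
        return Paths.get(file).toAbsolutePath().normalize().toString();
    }

    public static void main(String[] args) {
        System.out.println("broken: '" + brokenNormalize(".") + "'"); // empty
        System.out.println("fixed:  '" + fixedNormalize(".") + "'");  // working dir
    }
}
```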
[jira] [Updated] (MAPREDUCE-7225) Fix broken current folder expansion during MR job start
[ https://issues.apache.org/jira/browse/MAPREDUCE-7225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7225:

Attachment: MAPREDUCE-7225.branch-3.1.001.patch

> Fix broken current folder expansion during MR job start
> -------------------------------------------------------
>
>                 Key: MAPREDUCE-7225
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7225
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: mrv2
>    Affects Versions: 2.9.0, 3.0.3
>            Reporter: Adam Antal
>            Assignee: Peter Bacsko
>            Priority: Major
>         Attachments: MAPREDUCE-7225-001.patch, MAPREDUCE-7225-002.patch, MAPREDUCE-7225-002.patch, MAPREDUCE-7225-003.patch, MAPREDUCE-7225.branch-3.1.001.patch, MAPREDUCE-7225.branch-3.2.001.patch
>
> Starting a sleep job giving "." as the files that should be localized works fine up until 2.9.0, but after that the user gets an IllegalArgumentException. This change is a side effect of HADOOP-12747, where the {{GenericOptionsParser#validateFiles}} function was modified.
> Can be reproduced by starting a sleep job with "-files ." given as an extra parameter. Log (stack trace as above):
> {noformat}
> sudo -u hdfs hadoop jar hadoop-mapreduce-client-jobclient-3.0.0.jar sleep -files . -m 1 -r 1 -rt 2000 -mt 2000
> WARNING: Use "yarn jar" to launch YARN applications.
> {noformat}
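The failure mode in the description can be illustrated with plain Java, independent of Hadoop. This is a hypothetical sketch of the mechanism, not Hadoop's actual code: when the expansion of "." produces a directory URI ending in a slash, taking the text after the last '/' as the file name yields an empty string, which is exactly the kind of value that triggers "Can not create a Path from an empty string".

```java
public class EmptyComponentDemo {
    public static void main(String[] args) {
        // A directory URI ending in '/' (e.g. a hypothetical expansion of ".").
        String qualified = "file:/user/hdfs/dir/";

        // Naively taking everything after the last '/' as the file name
        // produces an empty string, the kind of value new Path("") rejects.
        String name = qualified.substring(qualified.lastIndexOf('/') + 1);
        System.out.println("name is empty: " + name.isEmpty());  // prints "name is empty: true"
    }
}
```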
[jira] [Updated] (MAPREDUCE-7225) Fix broken current folder expansion during MR job start
[ https://issues.apache.org/jira/browse/MAPREDUCE-7225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7225:

Attachment: MAPREDUCE-7225.branch-3.2.001.patch
[jira] [Updated] (MAPREDUCE-7225) Fix broken current folder expansion during MR job start
[ https://issues.apache.org/jira/browse/MAPREDUCE-7225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7225:

Attachment: MAPREDUCE-7225-003.patch
[jira] [Commented] (MAPREDUCE-7225) Fix broken current folder expansion during MR job start
[ https://issues.apache.org/jira/browse/MAPREDUCE-7225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16897181#comment-16897181 ] Peter Bacsko commented on MAPREDUCE-7225:

Thanks [~adam.antal] for the comments.

"Could you please add an extra slash in lines 400 and 403?"

Yep, in fact a more correct URI looks like {{hdfs://localhost:1234/path/}}.

"maybe we can use a spy instead of a mock Path object?"

Will modify this.

"Another question: what happens if we give the root folder ("/") to the JobSubmitter?"

Indeed, that is another scenario in which {{JobSubmitter}} fails. The root folder will be handled separately, and a new test case is necessary.
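The "extra slash" request above is about URI structure rather than style: in {{scheme://host/path}} form, the two slashes introduce the authority component, so a URI written with only two slashes before a local path silently turns its first path segment into a host name. A minimal stdlib-only sketch (the URIs are made-up examples, not the ones in the patch):

```java
import java.net.URI;
import java.net.URISyntaxException;

public class AuthorityDemo {
    public static void main(String[] args) throws URISyntaxException {
        // Two slashes: "home" is parsed as the authority (host), not a path segment.
        URI two = new URI("file://home/hadoop/");
        // Three slashes: empty authority, so "/home/hadoop/" is all path.
        URI three = new URI("file:///home/hadoop/");

        System.out.println(two.getAuthority() + " " + two.getPath());  // home /hadoop/
        System.out.println(three.getPath());                           // /home/hadoop/
    }
}
```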
[jira] [Commented] (MAPREDUCE-7225) Fix broken current folder expansion during MR job start
[ https://issues.apache.org/jira/browse/MAPREDUCE-7225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896992#comment-16896992 ] Peter Bacsko commented on MAPREDUCE-7225:

Quick update: I tried this code:

{noformat}
URI uri = new URI("file://home/hadoop//");
System.out.println(uri.normalize());
{noformat}

which prints "file://home/hadoop/". Since we call {{URI.normalize()}} when constructing a {{Path}}, it is enough to handle the single "/" at the end.
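The snippet in the comment above runs as-is once wrapped in a class: {{java.net.URI#normalize}} collapses redundant slashes inside the path but keeps a single trailing slash (an empty last component), which is why handling one trailing "/" is sufficient.

```java
import java.net.URI;
import java.net.URISyntaxException;

public class NormalizeDemo {
    public static void main(String[] args) throws URISyntaxException {
        URI uri = new URI("file://home/hadoop//");
        // The duplicate slash is collapsed, but the single trailing
        // slash survives normalization.
        System.out.println(uri.normalize());  // file://home/hadoop/
    }
}
```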
[jira] [Updated] (MAPREDUCE-7225) Fix broken current folder expansion during MR job start
[ https://issues.apache.org/jira/browse/MAPREDUCE-7225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Bacsko updated MAPREDUCE-7225:

Attachment: MAPREDUCE-7225-002.patch
[jira] [Commented] (MAPREDUCE-7225) Fix broken current folder expansion during MR job start
[ https://issues.apache.org/jira/browse/MAPREDUCE-7225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16896169#comment-16896169 ] Peter Bacsko commented on MAPREDUCE-7225:

Uploaded patch v2 with a unit test. Without the fix the test fails; with the proposed changes it passes.
[jira] [Updated] (MAPREDUCE-7225) Fix broken current folder expansion during MR job start
[ https://issues.apache.org/jira/browse/MAPREDUCE-7225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Bacsko updated MAPREDUCE-7225:
------------------------------------
    Attachment: MAPREDUCE-7225-002.patch

> Fix broken current folder expansion during MR job start
> -------------------------------------------------------
>
>                 Key: MAPREDUCE-7225
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7225
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: mrv2
>    Affects Versions: 2.9.0, 3.0.3
>            Reporter: Adam Antal
>            Assignee: Peter Bacsko
>            Priority: Major
>         Attachments: MAPREDUCE-7225-001.patch, MAPREDUCE-7225-002.patch
>
> Starting a sleep job that gives "." as a file to be localized works fine up
> until 2.9.0, but after that the user gets an IllegalArgumentException. This
> change is a side effect of HADOOP-12747, where the
> {{GenericOptionsParser#validateFiles}} method was modified.
> Can be reproduced by starting a sleep job with "-files ." given as an extra
> parameter. Log:
> {noformat}
> sudo -u hdfs hadoop jar hadoop-mapreduce-client-jobclient-3.0.0.jar sleep -files . -m 1 -r 1 -rt 2000 -mt 2000
> WARNING: Use "yarn jar" to launch YARN applications.
> 19/07/17 08:13:26 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm21
> 19/07/17 08:13:26 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /user/hdfs/.staging/job_1563349475208_0017
> 19/07/17 08:13:26 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/hdfs/.staging/job_1563349475208_0017
> java.lang.IllegalArgumentException: Can not create a Path from an empty string
> 	at org.apache.hadoop.fs.Path.checkPathArg(Path.java:168)
> 	at org.apache.hadoop.fs.Path.<init>(Path.java:180)
> 	at org.apache.hadoop.fs.Path.<init>(Path.java:125)
> 	at org.apache.hadoop.mapreduce.JobResourceUploader.copyRemoteFiles(JobResourceUploader.java:686)
> 	at org.apache.hadoop.mapreduce.JobResourceUploader.uploadFiles(JobResourceUploader.java:262)
> 	at org.apache.hadoop.mapreduce.JobResourceUploader.uploadResourcesInternal(JobResourceUploader.java:203)
> 	at org.apache.hadoop.mapreduce.JobResourceUploader.uploadResources(JobResourceUploader.java:131)
> 	at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:99)
> 	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:194)
> 	at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
> 	at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1726)
> 	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
> 	at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1588)
> 	at org.apache.hadoop.mapreduce.SleepJob.run(SleepJob.java:273)
> 	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> 	at org.apache.hadoop.mapreduce.SleepJob.main(SleepJob.java:194)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:498)
> 	at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
> 	at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
> 	at org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:139)
> 	at org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:147)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:498)
> 	at org.apache.hadoop.util.RunJar.run(RunJar.java:313)
> 	at org.apache.hadoop.util.RunJar.main(RunJar.java:227)
> {noformat}
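The failure quoted above ("Can not create a Path from an empty string") can be illustrated outside Hadoop. The sketch below uses a hypothetical {{expand_entry}} helper in Python, not the actual {{GenericOptionsParser#validateFiles}} code: it shows how resolving "." and then stripping the working-directory prefix collapses the entry to an empty string, which is exactly the kind of value a Path constructor that rejects empty input throws on.

```python
import os

def expand_entry(entry: str) -> str:
    """Hypothetical sketch of a validateFiles-style expansion step:
    resolve an entry against the current directory, then strip the
    working-directory prefix to obtain the component to localize.
    This is an illustration of the failure mode, not Hadoop's code."""
    resolved = os.path.abspath(entry)
    cwd = os.path.abspath(".")
    # For "." itself, resolved == cwd, so stripping the prefix
    # leaves an empty string -- the value Path() later rejects.
    return resolved[len(cwd):].lstrip(os.sep)

print(repr(expand_entry(".")))       # the current dir collapses to ''
print(repr(expand_entry("./data")))  # a real child survives as 'data'
```

A fix therefore has to special-case "." (or any entry that resolves to the working directory) before the result is handed to a Path constructor.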
[jira] [Commented] (MAPREDUCE-7225) Fix broken current folder expansion during MR job start
[ https://issues.apache.org/jira/browse/MAPREDUCE-7225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16895923#comment-16895923 ]

Peter Bacsko commented on MAPREDUCE-7225:
-----------------------------------------

[~wilfreds] [~aajisaka] any ideas?
[jira] [Updated] (MAPREDUCE-7225) Fix broken current folder expansion during MR job start
[ https://issues.apache.org/jira/browse/MAPREDUCE-7225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Bacsko updated MAPREDUCE-7225:
------------------------------------
    Status: Patch Available  (was: In Progress)
[jira] [Work stopped] (MAPREDUCE-7225) Fix broken current folder expansion during MR job start
[ https://issues.apache.org/jira/browse/MAPREDUCE-7225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Work on MAPREDUCE-7225 stopped by Peter Bacsko.
-----------------------------------------------
[jira] [Work started] (MAPREDUCE-7225) Fix broken current folder expansion during MR job start
[ https://issues.apache.org/jira/browse/MAPREDUCE-7225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Work on MAPREDUCE-7225 started by Peter Bacsko.
-----------------------------------------------