[
https://issues.apache.org/jira/browse/HDFS-12920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16291005#comment-16291005
]
Jason Lowe commented on HDFS-12920:
-----------------------------------
This only occurs if the job submitter is using 3.x jars and the submitted job
is using 2.x jars. If the job submitter uses the same jars as the job itself,
this does not happen, since the values copied from hdfs-default.xml into
job.xml during job submission are compatible with the parsing code.
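The failure mode is easy to reproduce outside Hadoop: the 2.x code path effectively parses the raw value with Long.parseLong, so a 3.x default like "30s" throws NumberFormatException, while 3.x parses the unit suffix. The sketch below is a simplified stand-in for what Hadoop 3.x's Configuration.getTimeDuration does, not the real implementation; the class name and parseMillis helper are made up for illustration (the real method also handles m/h/d and more suffixes):

```java
import java.util.concurrent.TimeUnit;

public class TimeDurationDemo {
    // Simplified sketch of 3.x-style unit-aware parsing (assumption:
    // loosely modeled on Configuration.getTimeDuration, not the real code).
    static long parseMillis(String value) {
        String v = value.trim().toLowerCase();
        if (v.endsWith("ms")) {
            return Long.parseLong(v.substring(0, v.length() - 2));
        } else if (v.endsWith("s")) {
            return TimeUnit.SECONDS.toMillis(Long.parseLong(v.substring(0, v.length() - 1)));
        }
        // No unit suffix: treat as a raw number, as 2.x getLong() would.
        return Long.parseLong(v);
    }

    public static void main(String[] args) {
        // The 2.x jars parse with the equivalent of Long.parseLong,
        // so the new 3.x default "30s" blows up:
        try {
            Long.parseLong("30s");
        } catch (NumberFormatException e) {
            System.out.println("2.x-style parse fails: " + e.getMessage());
        }
        // A unit-aware parser accepts both the old and new forms:
        System.out.println(parseMillis("30s")); // 30000
        System.out.println(parseMillis("30"));  // 30
    }
}
```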
So another workaround is to have at least two tarballs on HDFS, one that uses
3.x and one that uses 2.x. The 3.x site configs request the 3.x tarball and
the 2.x site configs request the 2.x tarball. When a job submitter client
upgrades to 3.x jars, it can switch to the 3.x configs at the same time and
start running jobs against 3.x as well.
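One way to wire that up (a sketch, assuming the framework tarball is shipped via the standard mapreduce.application.framework.path mechanism; the HDFS path and fragment alias below are illustrative, not taken from this issue) is for the 3.x client's mapred-site.xml to point at the 3.x tarball:

```xml
<!-- 3.x client's mapred-site.xml; path and alias are examples only -->
<property>
  <name>mapreduce.application.framework.path</name>
  <value>hdfs:///apps/mapreduce/hadoop-3.0.0.tar.gz#mr-framework</value>
</property>
```

with the 2.x site configs pointing at the 2.x tarball the same way.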
> HDFS default value change (with adding time unit) breaks old version MR
> tarball work with new version (3.0) of hadoop
> ---------------------------------------------------------------------------------------------------------------------
>
> Key: HDFS-12920
> URL: https://issues.apache.org/jira/browse/HDFS-12920
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs
> Reporter: Junping Du
> Priority: Blocker
>
> After HADOOP-15059 got resolved, I tried to deploy the 2.9.0 tarball with
> 3.0.0 RC1 and ran a job, which failed with the following errors:
> {noformat}
> 2017-12-12 13:29:06,824 INFO [main] org.apache.hadoop.service.AbstractService: Service org.apache.hadoop.mapreduce.v2.app.MRAppMaster failed in state INITED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.NumberFormatException: For input string: "30s"
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.NumberFormatException: For input string: "30s"
> 	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:542)
> 	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:522)
> 	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.callWithJobClassLoader(MRAppMaster.java:1764)
> 	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.createOutputCommitter(MRAppMaster.java:522)
> 	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:308)
> 	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> 	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$5.run(MRAppMaster.java:1722)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1886)
> 	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1719)
> 	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1650)
> {noformat}
> This is because of HDFS-10845: we added time units to the default values in
> hdfs-default.xml, but they cannot be recognized by old-version MR jars.
> This breaks our rolling-upgrade story, so it should be marked as a blocker.
> A quick workaround is to override the values in hdfs-site.xml with all time
> units removed. But the right way may be to revert HDFS-10845 (and get rid of
> the noisy warnings).
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]