[
https://issues.apache.org/jira/browse/HDFS-12918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288537#comment-16288537
]
Zach Amsden commented on HDFS-12918:
------------------------------------
Maybe that is the real bug then. I got this exception when upgrading an
existing HDFS cluster - reformatting was required:
{noformat}
Failed to load image from FSImageFile(file=/data/2/dfs/nn/current/fsimage_0000000000008728887, cpktTxId=0000000000008728887)
java.lang.IllegalArgumentException: Missing state field in ErasureCodingPolicy proto
    at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
    at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.convertErasureCodingPolicyInfo(PBHelperClient.java:2973)
    at org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadErasureCodingSection(FSImageFormatProtobuf.java:386)
    at org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:298)
    at org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:188)
    at org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:227)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:928)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:912)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:785)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:719)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:317)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1072)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:704)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:665)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:727)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:950)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:929)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1653)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1720)
{noformat}
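Judging from the trace, the loader hard-fails on any ErasureCodingPolicyProto whose state field is absent from the serialized image. Below is a minimal sketch of that kind of guard (a hypothetical reconstruction for illustration, not the actual PBHelperClient source; the hasState flag stands in for the generated proto's hasState() accessor):
{code:java}
import com.google.common.base.Preconditions;

public final class StateGuardSketch {
  // Stand-in for the presence check on the parsed ErasureCodingPolicyProto.
  static void checkStatePresent(boolean hasState) {
    Preconditions.checkArgument(hasState,
        "Missing state field in ErasureCodingPolicy proto");
  }

  public static void main(String[] args) {
    // An fsimage written before the field existed carries no state on the
    // wire, so the guard throws IllegalArgumentException, as in the trace.
    checkStatePresent(false);
  }
}
{code}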
> EC Policy defaults incorrectly to enabled in protobufs
> ------------------------------------------------------
>
> Key: HDFS-12918
> URL: https://issues.apache.org/jira/browse/HDFS-12918
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Zach Amsden
> Assignee: Manoj Govindassamy
> Priority: Critical
>
> According to the documentation and code comments, the default state for an
> erasure coding policy is disabled:
> /** Policy is disabled. It's policy default state. */
> DISABLED(1),
> However, HDFS-12258 appears to have incorrectly set the policy state in the
> protobuf to enabled:
> {code:java}
> message ErasureCodingPolicyProto {
> optional string name = 1;
> optional ECSchemaProto schema = 2;
> optional uint32 cellSize = 3;
> required uint32 id = 4; // Actually a byte - only 8 bits used
> + optional ErasureCodingPolicyState state = 5 [default = ENABLED];
> }
> {code}
> This means the parameter can't actually be optional; it must always be
> included, and existing serialized data without this field will be
> incorrectly interpreted as having erasure coding enabled.
> This unnecessarily breaks compatibility and will force existing HDFS
> installations that store metadata in protobufs to be reformatted.
> It looks like a simple mistake that was overlooked in code review.
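A note on why the [default = ENABLED] choice bites: proto2 hands back the declared default whenever an optional field is absent from the wire, so an unguarded getState() reads ENABLED for every image written before the field existed. The sketch below mimics those semantics with plain Java stand-ins (the class and method names are illustrative, not the Hadoop or protobuf-generated API) and shows a presence-checked fallback to the documented DISABLED default:
{code:java}
enum State { DISABLED, ENABLED }

// Minimal stand-in mimicking a proto2 message with
// `optional State state [default = ENABLED]`.
final class PolicyProtoSketch {
  private final State state; // null == field absent on the wire
  PolicyProtoSketch(State state) { this.state = state; }

  boolean hasState() { return state != null; }

  // proto2 accessors return the declared default for an absent field.
  State getState() { return hasState() ? state : State.ENABLED; }
}

public final class DefaultDemo {
  public static void main(String[] args) {
    // A policy deserialized from a pre-HDFS-12258 fsimage: no state field.
    PolicyProtoSketch old = new PolicyProtoSketch(null);

    // Naive read: reports ENABLED purely because of the proto default.
    System.out.println("naive read:   " + old.getState());

    // Guarded read: checks presence first and falls back to DISABLED,
    // preserving the documented default for pre-upgrade images.
    State safe = old.hasState() ? old.getState() : State.DISABLED;
    System.out.println("guarded read: " + safe);
  }
}
{code}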