[
https://issues.apache.org/jira/browse/HDFS-7788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Rushabh S Shah updated HDFS-7788:
---------------------------------
Attachment: HDFS-7788-binary.patch
I tweaked the source code to allow creating a file with a blockSize of zero, saved the resulting image, and added it to the resources folder.
While reading the image, if the namenode encounters any inode with
preferredBlockSize == 0 (which was possible pre-2.1.0-beta, as Kihwal mentioned
in the jira), it will change it to LongBitFormat.MIN so that the namenode can
read that image without any error.
I have attached a tarball which contains the image.
But whenever I try to apply the patch on my machine, it always copies the file into
/hadoop-hdfs/src/test/resources/
instead of hadoop-hdfs-project/hadoop-hdfs/src/test/resources/
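The fallback described above can be sketched roughly as follows. This is an illustrative sketch only: the class and method names here are hypothetical, and the real patch applies the substitution inside the fsimage loading path using the actual LongBitFormat.MIN constant for the PREFERRED_BLOCK_SIZE field.

```java
// Hypothetical sketch of the load-time fallback described in the comment.
// Names are illustrative, not actual Hadoop identifiers.
class PreferredBlockSizeFixer {
    // Stand-in for the smallest value the serialized bit format allows;
    // the real code would use LongBitFormat.MIN for this field.
    static final long MIN_PREFERRED_BLOCK_SIZE = 1L;

    // Normalize a preferred block size read from an old fsimage:
    // pre-2.1.0-beta images may contain 0, which post-2.6 loaders reject,
    // so map 0 to the minimum representable value instead of failing.
    static long normalize(long preferredBlockSize) {
        if (preferredBlockSize == 0) {
            return MIN_PREFERRED_BLOCK_SIZE;
        }
        return preferredBlockSize;
    }
}
```

With this substitution, an image containing a zero-size inode loads cleanly instead of aborting namenode startup.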
> Post-2.6 namenode may not start up with an image containing inodes created
> with an old release.
> -----------------------------------------------------------------------------------------------
>
> Key: HDFS-7788
> URL: https://issues.apache.org/jira/browse/HDFS-7788
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Kihwal Lee
> Assignee: Rushabh S Shah
> Priority: Blocker
> Attachments: HDFS-7788-binary.patch
>
>
> Before HDFS-4305, which was fixed in 2.1.0-beta, clients could specify an
> arbitrarily small preferred block size for a file, including 0. This was
> normally done by faulty clients or failed creates, but it was possible.
> Until 2.5, reading a fsimage containing inodes with a 0-byte preferred block
> size was allowed, so if a fsimage contained such an inode, the namenode would
> come up fine. In 2.6, the preferred block size is required to be > 0. Because
> of this change, an image that worked with 2.5 may not work with 2.6.
> If a cluster ever ran a version of Hadoop earlier than 2.1.0-beta, it is
> exposed to this risk even if it worked fine with 2.5.
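The stricter post-2.6 behavior the description refers to can be sketched as a simple load-time check. This is an assumption-laden sketch: the method and exception used here are hypothetical stand-ins, while the real validation lives in the fsimage loading code.

```java
// Hypothetical sketch of the post-2.6 strict check: a preferred block
// size of 0 (legal in images written before 2.1.0-beta) now causes the
// namenode to reject the inode at image load time.
class BlockSizeCheck {
    static void validate(long preferredBlockSize) {
        if (preferredBlockSize <= 0) {
            throw new IllegalStateException(
                "Unexpected preferredBlockSize: " + preferredBlockSize);
        }
    }
}
```

An image containing even one such inode would therefore prevent the namenode from starting, which is exactly the failure mode this issue tracks.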
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)