[
https://issues.apache.org/jira/browse/HDFS-12120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16087393#comment-16087393
]
Kihwal Lee commented on HDFS-12120:
-----------------------------------
It sounds fine conceptually.
- The variable length block feature is new and has not been fully field tested.
Making an existing popular feature depend on it carries risk. We need to think
about how that risk can be mitigated, e.g. provide a way to opt out in case it
causes issues.
- We need to think about compatibility issues (interoperability between new/old
servers and clients). Any limitations should be mentioned in the release note.
- Using the block ID as the cutoff may not be reliable. Old clusters can still
have old blocks with old-style IDs. The generation stamp might work better (a
rough sketch follows below).
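
For illustration only, here is a minimal sketch of a genstamp-based cutoff. The
class and field names are hypothetical and are not from the HDFS-12120 patch;
the point is simply to compare a block's generation stamp against the one
recorded when the rolling upgrade was prepared.

{code:java}
/**
 * Hypothetical sketch only: not part of the HDFS-12120 patch or existing
 * NameNode code. It illustrates a genstamp-based cutoff for deciding whether
 * an append must go to a new block.
 */
class PreUpgradeBlockCheck {
  /** Generation stamp captured at rolling-upgrade prepare time (assumed). */
  private final long rollingUpgradeStartGenStamp;

  PreUpgradeBlockCheck(long rollingUpgradeStartGenStamp) {
    this.rollingUpgradeStartGenStamp = rollingUpgradeStartGenStamp;
  }

  /**
   * A block whose generation stamp predates the rolling-upgrade prepare was
   * written by the old software, so an append should not reopen it.
   */
  boolean shouldForceNewBlockOnAppend(long blockGenStamp) {
    return blockGenStamp < rollingUpgradeStartGenStamp;
  }
}
{code}

An opt-out could be a simple boolean configuration key consulted before this
check, so that clusters hitting problems with the new-block path can fall back
to the current behavior.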
> Use new block for pre-RollingUpgrade files' append requests
> -----------------------------------------------------------
>
> Key: HDFS-12120
> URL: https://issues.apache.org/jira/browse/HDFS-12120
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Vinayakumar B
> Assignee: Vinayakumar B
> Attachments: HDFS-12120-01.patch
>
>
> After RollingUpgrade prepare, an append on a pre-RU file will re-open the
> same last block and make changes to it (appending extra data, changing the
> genstamp, etc).
> These changes to the block will not be tracked on the Datanodes (either in
> trash or via hardlinks).
> This creates a problem if RollingUpgrade.Rollback is called.
> Since both the block state and size have changed, the block will be marked
> corrupt after rollback.
> To avoid this, the first append on a pre-RU file can be forced to write to a
> new block (a client-side illustration follows below).
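
As a client-side illustration only (not the NameNode-side change this issue
proposes), the variable length block feature referenced in the comment above
already lets a caller ask for a new block on append via CreateFlag.NEW_BLOCK.
The path and data below are made up.

{code:java}
import java.util.EnumSet;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class AppendNewBlockExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path file = new Path("/data/pre-ru-file");   // hypothetical path

    FileSystem fs = FileSystem.get(conf);
    if (fs instanceof DistributedFileSystem) {
      DistributedFileSystem dfs = (DistributedFileSystem) fs;
      // APPEND + NEW_BLOCK starts a fresh block instead of reopening the
      // last partial block, leaving the old block untouched for rollback.
      try (FSDataOutputStream out = dfs.append(file,
          EnumSet.of(CreateFlag.APPEND, CreateFlag.NEW_BLOCK), 4096, null)) {
        out.writeBytes("appended after rolling upgrade prepare\n");
      }
    }
  }
}
{code}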