[
https://issues.apache.org/jira/browse/HDFS-2602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Aaron T. Myers updated HDFS-2602:
---------------------------------
Attachment: HADOOP-7896-HDFS-1623.patch
Thanks again for the review, Eli. I agree that the synchronization is
unnecessary. Here's an updated patch that removes it.
I'm going to commit this momentarily unless there are further objections.
> Standby needs to maintain BlockInfo while following edits
> ---------------------------------------------------------
>
> Key: HDFS-2602
> URL: https://issues.apache.org/jira/browse/HDFS-2602
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ha
> Affects Versions: HA branch (HDFS-1623)
> Reporter: Todd Lipcon
> Assignee: Aaron T. Myers
> Priority: Critical
> Attachments: HADOOP-7896-HDFS-1623.patch, HDFS-2602.patch,
> HDFS-2602.patch
>
>
> As described in HDFS-1975:
> When we close a file or add another block to a file, we write OP_CLOSE or
> OP_ADD to the transaction log. When FSEditLogLoader sees these transaction
> types, it creates new BlockInfo objects for all of the blocks listed in the
> transaction. These new BlockInfos have no block locations associated with
> them, so when we close a file the SBNN loses its block location info for
> that file and is no longer "hot".
> I have an ugly hack which copies over the old BlockInfos from the existing
> INode, but I'm not convinced it's the right way. It might be cleaner to add
> new opcode types like OP_ADD_ADDITIONAL_BLOCK, and actually treat OP_CLOSE as
> just a finalization of INodeFileUnderConstruction to INodeFile, rather than
> replacing block info at all.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira