[ https://issues.apache.org/jira/browse/HDFS-1108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13089840#comment-13089840 ]

Konstantin Shvachko commented on HDFS-1108:
-------------------------------------------

Yes, two proposals. So let me ask you the same question I have been asking 
Suresh: which proposal are you implementing?
1) RPC based (BackupNode)
2) shared storage based (AvatarNode-like design)

As I understand it, Suresh is trying to build a universal HA framework 
applicable to both. Your patch is intended for proposal 2 only and therefore 
contradicts his effort, unless I am missing something.
Is there a community decision on where we are going? Do we need to have a vote 
on this?

I consider the performance overhead of syncing on each addBlock() a 
disadvantage of the shared-storage-based approach.
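To make the overhead argument concrete, here is a toy sketch (with made-up class and method names, not actual NameNode code) counting edit-log syncs under the two policies: syncing on every block allocation versus deferring persistence to hflush()/close(), assuming a file of 1000 blocks.

```java
// Toy model of edit-log sync counts; names are hypothetical, not HDFS APIs.
public class SyncOverheadDemo {
    static class EditLog {
        int syncCount = 0;
        final boolean syncOnAllocate;
        EditLog(boolean syncOnAllocate) { this.syncOnAllocate = syncOnAllocate; }
        // Logging a block allocation forces a sync only under the eager policy.
        void logBlockAllocation() { if (syncOnAllocate) sync(); }
        // A client-driven hflush()/close() always persists pending entries.
        void hflush() { sync(); }
        // Stands in for a costly fsync to (shared) edit-log storage.
        void sync() { syncCount++; }
    }

    public static void main(String[] args) {
        int blocks = 1000;

        EditLog eager = new EditLog(true);   // sync on each addBlock()
        for (int i = 0; i < blocks; i++) eager.logBlockAllocation();
        eager.hflush();

        EditLog lazy = new EditLog(false);   // persist only on hflush()/close()
        for (int i = 0; i < blocks; i++) lazy.logBlockAllocation();
        lazy.hflush();

        System.out.println("syncs with per-addBlock sync: " + eager.syncCount);
        System.out.println("syncs with hflush-only sync: " + lazy.syncCount);
    }
}
```

In this toy model the eager policy pays one sync per allocated block plus the final flush (1001 syncs) versus a single sync for the deferred policy, which is the cost difference being debated.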

I think this issue should be blocked until the HA approach dilemma is resolved, 
since it implicitly moves development in one particular direction.

> Log newly allocated blocks
> --------------------------
>
>                 Key: HDFS-1108
>                 URL: https://issues.apache.org/jira/browse/HDFS-1108
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: name-node
>            Reporter: dhruba borthakur
>            Assignee: Todd Lipcon
>             Fix For: HA branch (HDFS-1623)
>
>         Attachments: HDFS-1108.patch, hdfs-1108-habranch.txt, hdfs-1108.txt
>
>
> The current HDFS design says that newly allocated blocks for a file are not 
> persisted in the NN transaction log when the block is allocated. Instead, an 
> hflush() or a close() on the file persists the blocks into the transaction 
> log. It would be nice if we could immediately persist newly allocated blocks 
> (as soon as they are allocated) for specific files.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
