[ https://issues.apache.org/jira/browse/HADOOP-2330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12624427#action_12624427 ]

dhruba borthakur commented on HADOOP-2330:
------------------------------------------

We should not assume that when a file is preallocated by seeking/writing to a 
size beyond the current file size, the intermediate data will be filled with 
zeros. The content of that portion of the file is actually unspecified.

http://java.sun.com/j2se/1.4.2/docs/api/java/nio/channels/FileChannel.html#position(long)

So, the trick you suggest of setting INVALID_OP to zero might not work 
correctly.
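One way to sidestep the unspecified-gap problem is to preallocate by explicitly writing zero-filled buffers out to the target size, so every byte of the reserved region is a known value. The sketch below is only an illustration of that idea, not the patch attached to this issue; the class and method names are made up for the example.

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class PreallocateDemo {
    // Preallocate by explicitly writing zero-filled chunks rather than
    // seeking past EOF, since the content of a gap created by writing
    // beyond the current size is unspecified per the FileChannel javadoc.
    static void preallocate(FileChannel channel, long size) throws Exception {
        ByteBuffer zeros = ByteBuffer.allocate(64 * 1024); // allocate() zero-fills
        long written = channel.size();
        channel.position(written);
        while (written < size) {
            zeros.clear();
            // Don't overshoot the target size on the last chunk.
            zeros.limit((int) Math.min(zeros.capacity(), size - written));
            written += channel.write(zeros);
        }
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("edits", ".log");
        f.deleteOnExit();
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            preallocate(raf.getChannel(), 1 << 20); // reserve 1 MB up front
            System.out.println(raf.length());
        }
    }
}
```

With explicit zero-filling, the edits log writer can also rely on scanning for a zero opcode to find the logical end of the log, which the unspecified-gap behavior would otherwise break.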

> Preallocate transaction log to improve namenode transaction logging 
> performance
> -------------------------------------------------------------------------------
>
>                 Key: HADOOP-2330
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2330
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: dfs
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>         Attachments: preallocateTransactionLog.patch, 
> preallocateTransactionLog.patch, preallocateTransactionLog2.patch, 
> preallocateTransactionLog3.patch
>
>
> In the current implementation, the transaction log is opened in "append" mode 
> and every new transaction is written to the end of the log. This means that 
> new blocks get allocated to the edits file frequently.
> It is worth measuring the performance improvement when big chunks of the 
> transaction log are allocated up front, so that adding new transactions does 
> not cause frequent block allocations for the edits log.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
