[
https://issues.apache.org/jira/browse/HADOOP-4379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12714168#action_12714168
]
stack commented on HADOOP-4379:
-------------------------------
In this test run, the append never succeeds... or at least, after 25 minutes
it still has not successfully opened the file for append. We try the append, fail
with an AlreadyBeingCreatedException, sleep a second, and then cycle. Usually
it takes well under a minute to successfully open-to-append. The namenode log is
here: www.duboce.net:~stack/wontstop_namenode.log.gz. In this case, I killed the
datanode and the hbase regionserver, simulating a machine falling off the
cluster (previously, I was mostly just killing the regionserver process and not
the datanode).
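
For reference, the retry cycle described above looks roughly like the sketch
below. This is a minimal illustration, not the actual HBase regionserver code:
the path comes from the command line, the one-second backoff matches the
description above, and on 0.19.x the AlreadyBeingCreatedException comes back
from the namenode wrapped in a RemoteException.

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendRetry {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path path = new Path(args[0]);          // e.g. the regionserver's log file

    FSDataOutputStream out = null;
    while (out == null) {
      try {
        // Ask the namenode to re-open the file for append. While it still
        // believes the old (dead) writer holds the lease, this fails with
        // AlreadyBeingCreatedException wrapped in a RemoteException.
        out = fs.append(path);
      } catch (IOException e) {
        Thread.sleep(1000);                 // back off a second, then cycle
      }
    }
    out.close();
  }
}
{code}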
> In HDFS, sync() not yet guarantees data available to the new readers
> --------------------------------------------------------------------
>
> Key: HADOOP-4379
> URL: https://issues.apache.org/jira/browse/HADOOP-4379
> Project: Hadoop Core
> Issue Type: New Feature
> Components: dfs
> Reporter: Tsz Wo (Nicholas), SZE
> Assignee: dhruba borthakur
> Priority: Blocker
> Fix For: 0.19.2
>
> Attachments: 4379_20081010TC3.java, fsyncConcurrentReaders.txt,
> fsyncConcurrentReaders3.patch, fsyncConcurrentReaders4.patch,
> fsyncConcurrentReaders5.txt, fsyncConcurrentReaders6.patch,
> fsyncConcurrentReaders9.patch, hypertable-namenode.log.gz, namenode.log,
> namenode.log, Reader.java, Reader.java, reopen_test.sh, ReopenProblem.java,
> Writer.java, Writer.java
>
>
> In the append design doc
> (https://issues.apache.org/jira/secure/attachment/12370562/Appends.doc), it
> says:
> * A reader is guaranteed to be able to read data that was 'flushed' before
> the reader opened the file.
> However, this feature is not yet implemented. Note that the operation
> 'flushed' is now called "sync".
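
For illustration, the guarantee quoted from the design doc amounts to the
pattern below: a writer syncs, then a reader that opens the file afterwards
should see the synced bytes. This is only a sketch against the 0.19-era API
(FSDataOutputStream.sync(), later renamed hflush()); the path and class name
are made up for the example.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SyncVisibility {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path path = new Path("/tmp/sync-visibility-test");

    // Writer: write some bytes and sync them without closing the file.
    FSDataOutputStream out = fs.create(path, true);
    out.writeBytes("flushed before the reader opened\n");
    out.sync();   // the design doc's 'flush'

    // Reader: a stream opened after the sync should see the synced bytes,
    // which is the guarantee this issue is about.
    FSDataInputStream in = fs.open(path);
    byte[] buf = new byte[64];
    int n = in.read(buf);
    System.out.println("reader saw " + n + " bytes");
    in.close();
    out.close();
  }
}
{code}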