[ https://issues.apache.org/jira/browse/HADOOP-4379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12668186#action_12668186 ]

Doug Judd commented on HADOOP-4379:
-----------------------------------

Ok, I've uploaded a reproducible test case (see attachments ReopenProblem.java 
and reopen_test.sh).  This test consistently hangs on both my local Mac HDFS 
installation and our 10-node Linux HDFS cluster.  As the code shows, the 
problem is not as simple as I suggested in my previous comment.  Here's the 
output I see when I run it:

[d...@motherlode000 0.9.2.1]$ ./reopen_test.sh 
Deleted hdfs://motherlode000:9000/hypertable/servers
rmr: cannot remove /hypertable/tables: No such file or directory.
Waiting for lease recovery ...
[... 13 identical lines ...]
length("/hypertable/servers/10.0.30.114_38060/log/root/0") = 163, obtained in 65258 milliseconds
length("/hypertable/servers/10.0.30.114_38060/log/root/0") = 163, obtained in 113 milliseconds
Read 163 bytes from root fragment 0
Read 0 bytes from root fragment 0
Waiting for lease recovery ...
[... repeats indefinitely ...]
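
For readers without an HDFS cluster handy, the write/sync/reopen pattern the 
test exercises can be sketched against the local filesystem. This is only a 
rough analogue: FileDescriptor.sync() on a local file provides the visibility 
guarantee that HDFS sync() is supposed to provide to readers that open the 
file afterwards. The class and method names below are illustrative, not taken 
from ReopenProblem.java:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class SyncVisibilitySketch {

    // Write 163 bytes (the size of the root log fragment above), sync,
    // then open the file with a fresh reader and count what it can see.
    static int writeSyncAndCount() throws IOException {
        File f = File.createTempFile("sync-visibility", ".dat");
        f.deleteOnExit();

        FileOutputStream out = new FileOutputStream(f);
        byte[] payload = new byte[163];
        out.write(payload);
        out.flush();
        out.getFD().sync();   // local analogue of HDFS sync()

        // New reader, opened after the sync, while the writer still
        // holds the file open -- this is the case that hangs on HDFS.
        FileInputStream in = new FileInputStream(f);
        int total = 0, n;
        byte[] buf = new byte[4096];
        while ((n = in.read(buf)) != -1) {
            total += n;
        }
        in.close();
        out.close();
        return total;
    }

    public static void main(String[] args) throws IOException {
        System.out.println("Read " + writeSyncAndCount() + " bytes");
    }
}
```

On a local filesystem the new reader sees all 163 bytes; the point of this 
issue is that an HDFS reader in the same position may see 0 bytes until 
lease recovery completes.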


> In HDFS, sync() not yet guarantees data available to the new readers
> --------------------------------------------------------------------
>
>                 Key: HADOOP-4379
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4379
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: Tsz Wo (Nicholas), SZE
>            Assignee: dhruba borthakur
>             Fix For: 0.19.1
>
>         Attachments: 4379_20081010TC3.java, fsyncConcurrentReaders.txt, 
> fsyncConcurrentReaders3.patch, fsyncConcurrentReaders4.patch, 
> hypertable-namenode.log.gz, Reader.java, Reader.java, reopen_test.sh, 
> ReopenProblem.java, Writer.java, Writer.java
>
>
> In the append design doc 
> (https://issues.apache.org/jira/secure/attachment/12370562/Appends.doc), it 
> says
> * A reader is guaranteed to be able to read data that was 'flushed' before 
> the reader opened the file
> However, this feature is not yet implemented.  Note that the operation 
> 'flushed' is now called "sync".

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
