[
https://issues.apache.org/jira/browse/HDFS-6633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14140958#comment-14140958
]
Vinayakumar B commented on HDFS-6633:
-------------------------------------
Hi [~szetszwo],
The patch proposes a blocking read when the file is not yet closed: it calls NN
and DN RPCs in a loop with a short sleep of 100ms.
As per the discussion on the mailing list, I think it's better to provide a
separate API that lets clients poll on demand for new data on such open files,
instead of blocking all clients' reads.
A normal read() could, as before, throw EOF once all bytes up to the initially
known length have been read; clients that care about such open files could then
poll for new data and continue reading once it becomes available.
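The poll-on-demand alternative could look roughly like the following
self-contained sketch. Note that `GrowingFile`, `pollNewLength()`, and
`readAll()` are hypothetical illustrations, not actual HDFS APIs; a real client
would learn the current length via NN/DN RPCs behind such a poll method.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical stand-in for a file that is still being written: its visible
// length grows until the writer closes it.
class GrowingFile {
    private final AtomicLong length = new AtomicLong(0);
    private volatile boolean closed = false;

    void append(long bytes) { length.addAndGet(bytes); }
    void close()            { closed = true; }
    long pollNewLength()    { return length.get(); }  // on-demand poll, no blocking
    boolean isClosed()      { return closed; }
}

public class PollingReader {
    // Read up to the currently known length, then poll on demand for more,
    // instead of blocking every read() inside a sleep loop.
    static long readAll(GrowingFile file) throws InterruptedException {
        long readSoFar = 0;
        while (true) {
            long known = file.pollNewLength();   // explicit poll call
            if (readSoFar < known) {
                readSoFar = known;               // "read" the newly visible bytes
            } else if (file.isClosed()) {
                return readSoFar;                // EOF only once the writer closed
            } else {
                Thread.sleep(100);               // caller chooses the poll interval
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        GrowingFile f = new GrowingFile();
        f.append(10);
        // Simulated writer: appends more data, then closes the file.
        new Thread(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) {}
            f.append(5);
            f.close();
        }).start();
        System.out.println(readAll(f));  // prints 15 once the writer closes
    }
}
```

The key difference from the patch is that the 100ms loop lives in an opt-in
helper driven by the client, so an ordinary read() path never blocks waiting
for a writer.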
Any thoughts?
> Support reading new data in a being written file until the file is closed
> -------------------------------------------------------------------------
>
> Key: HDFS-6633
> URL: https://issues.apache.org/jira/browse/HDFS-6633
> Project: Hadoop HDFS
> Issue Type: New Feature
> Components: hdfs-client
> Reporter: Tsz Wo Nicholas Sze
> Assignee: Tsz Wo Nicholas Sze
> Attachments: h6633_20140707.patch, h6633_20140708.patch
>
>
> When a file is being written, the file length keeps increasing. If the file
> is opened for read, the reader first gets the file length and then read only
> up to that length. The reader will not be able to read the new data written
> afterward.
> We propose adding a new feature so that readers will be able to read all the
> data until the writer closes the file.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)