[ 
https://issues.apache.org/jira/browse/HDFS-5259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-5259:
-----------------------------

    Attachment: HDFS-5259.000.patch

Uploaded a patch to address the jumbo write for append. Basically, the NFS gateway 
tries to identify the special access pattern and converts the write into a real 
append write. 
The observed access pattern is: after a file is reopened for append, if the 
previously written data is still in the client's kernel buffer cache, the client 
may combine that old data with the newly appended data into a single NFS WRITE 
call for the first write. Inside the NFS gateway, when this is the only pending 
write, the gateway drops the overlapping section and appends only the new data.
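
For illustration, below is a minimal, self-contained sketch of that trimming 
step under the stated assumptions; the names (WriteRequest, 
trimJumboAppendWrite) are hypothetical and are not the actual classes or 
methods in the patch.

    // Hypothetical sketch of the overlap-trimming idea described above.
    public class JumboAppendWriteExample {

        /** Minimal stand-in for an NFS WRITE request: offset + payload. */
        static final class WriteRequest {
            final long offset;
            final byte[] data;
            WriteRequest(long offset, byte[] data) {
                this.offset = offset;
                this.data = data;
            }
        }

        /**
         * If the only write after reopening a file for append starts before the
         * current end of file but extends past it, assume the client resent the
         * cached old data together with the new data: drop the overlapping
         * prefix and keep only the bytes past the current end of file.
         * Returns null when the request does not match this pattern.
         */
        static WriteRequest trimJumboAppendWrite(WriteRequest req, long currentFileLen,
                                                 boolean isOnlyPendingWrite) {
            long end = req.offset + req.data.length;
            boolean overlapsAndExtends = req.offset < currentFileLen && end > currentFileLen;
            if (!isOnlyPendingWrite || !overlapsAndExtends) {
                return null; // not the jumbo-append pattern; handle as usual
            }
            int overlap = (int) (currentFileLen - req.offset);
            byte[] newData = new byte[req.data.length - overlap];
            System.arraycopy(req.data, overlap, newData, 0, newData.length);
            // The trimmed request is a true append: it starts at the current EOF.
            return new WriteRequest(currentFileLen, newData);
        }

        public static void main(String[] args) {
            // File already contains 8 bytes ("ABCDEFGH"); the client reopened it
            // for append and sent one WRITE at offset 0 carrying old + new bytes.
            byte[] combined = "ABCDEFGH1234".getBytes();
            WriteRequest jumbo = new WriteRequest(0, combined);

            WriteRequest trimmed = trimJumboAppendWrite(jumbo, 8, true);
            System.out.println("append at offset " + trimmed.offset
                    + ", data = " + new String(trimmed.data)); // offset 8, "1234"
        }
    }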

> Support client which combines appended data with old data before sending it to 
> the NFS server
> ---------------------------------------------------------------------------------------
>
>                 Key: HDFS-5259
>                 URL: https://issues.apache.org/jira/browse/HDFS-5259
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: nfs
>            Reporter: Yesha Vora
>            Assignee: Brandon Li
>         Attachments: HDFS-5259.000.patch
>
>
> The append does not work with some Linux clients. The client gets an 
> "Input/output error" when it tries to append, because the NFS server treats 
> the request as a random write and fails it.


