[ 
https://issues.apache.org/jira/browse/HDFS-9164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Constantine Peresypkin updated HDFS-9164:
-----------------------------------------
    Assignee: Constantine Peresypkin
      Status: Patch Available  (was: Open)

> hdfs-nfs connector fails on O_TRUNC
> -----------------------------------
>
>                 Key: HDFS-9164
>                 URL: https://issues.apache.org/jira/browse/HDFS-9164
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: HDFS
>            Reporter: Constantine Peresypkin
>            Assignee: Constantine Peresypkin
>         Attachments: HDFS-9164.1.patch
>
>
> The Linux NFS client will issue `open(..., O_TRUNC); write()` when overwriting a 
> file that is already in the NFS client cache (probably to avoid evicting the 
> inode). This fails spectacularly on hdfs-nfs with an I/O error.
> Example:
> $ cp /some/file /to/hdfs/mount/
> $ cp /some/file /to/hdfs/mount/
> I/O error
> The first copy succeeds if the file is not yet in the cache; the second one 
> always fails.
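As a sketch of the client-side syscall sequence described above (run here against a local filesystem, where it succeeds; the step that reportedly fails with EIO over an hdfs-nfs mount is marked; the helper name and paths are illustrative, not from the patch):

```python
import os
import tempfile

def overwrite(path, data):
    # O_TRUNC on an already-existing (cached) inode, followed by write():
    # this is the sequence the Linux NFS client issues on overwrite, and
    # the O_TRUNC step is what hdfs-nfs reportedly rejects with an I/O error.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
    finally:
        os.close(fd)

path = os.path.join(tempfile.mkdtemp(), "file.txt")
overwrite(path, b"first copy\n")    # passes: file not yet in the client cache
overwrite(path, b"second copy\n")   # over hdfs-nfs this second overwrite fails
```

On a local filesystem both calls complete and the file holds the second payload; per this report, the second call over an hdfs-nfs mount returns an I/O error instead.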



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)