[ 
https://issues.apache.org/jira/browse/FLUME-2357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Shreedharan updated FLUME-2357:
------------------------------------

    Attachment: FLUME-2357.patch

Much of the size of the patch is due to a couple of file renames; otherwise 
the patch itself is fairly simple. In the BucketWriter, if a close fails, we 
simply reschedule the close to run again later, until it either succeeds or 
we hit a maximum number of attempts. A test case is included as well. This 
depends on the presence of the isFileClosed method in the HDFS client API; 
if that method is absent, no reattempts are made.
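
For illustration, here is a minimal sketch of that retry pattern in plain 
Java. It is not the code in the attached patch: the HdfsWriter interface, 
the class name, and both constants are hypothetical stand-ins.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of the close-retry pattern described above; not the patch
// itself. The interface and both constants are illustrative stand-ins, not
// Flume APIs or configuration keys.
public class CloseRetrySketch {

  // Stand-in for the writer whose close() can fail. The real code probes
  // for DistributedFileSystem#isFileClosed reflectively and skips retries
  // entirely when that method is absent from the HDFS client API.
  interface HdfsWriter {
    void close() throws Exception;
    boolean isFileClosed();
  }

  private static final int MAX_CLOSE_TRIES = 5;       // assumed cap
  private static final long RETRY_INTERVAL_SECS = 60; // assumed delay

  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();
  private final AtomicInteger closeTries = new AtomicInteger(0);

  // Attempt the close; on failure, schedule another attempt until it
  // succeeds or the maximum number of attempts is reached.
  void closeWithRetries(final HdfsWriter writer) {
    try {
      writer.close();
    } catch (Exception e) {
      if (closeTries.incrementAndGet() >= MAX_CLOSE_TRIES) {
        return; // give up; the file stays open, as before the patch
      }
      scheduler.schedule(new Runnable() {
        public void run() {
          if (!writer.isFileClosed()) {
            closeWithRetries(writer); // still open: try closing again
          }
        }
      }, RETRY_INTERVAL_SECS, TimeUnit.SECONDS);
    }
  }
}

The sketch keeps its own single-thread executor only to stay self-contained; 
reusing a scheduler the sink already owns would avoid the extra thread.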

> HDFS sink should retry closing files that previously had close errors
> ---------------------------------------------------------------------
>
>                 Key: FLUME-2357
>                 URL: https://issues.apache.org/jira/browse/FLUME-2357
>             Project: Flume
>          Issue Type: Bug
>          Components: Sinks+Sources
>    Affects Versions: v1.4.0
>            Reporter: Patrick Dvorak
>         Attachments: FLUME-2357.patch
>
>
> When the AbstractHDFSWriter fails to close a file (due to exceeding the 
> callTimeout or other HDFS issues), it will leave the file open and never 
> try again.  The only way to close the open files is to restart the Flume 
> agent.  There should be a configurable option to allow the sink to retry 
> closing files that had previously failed to close.
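
A hedged sketch of what such a configurable option could look like in an 
agent's properties file (the property names are illustrative; whatever knobs 
the patch actually exposes may differ):

# Illustrative property names, not confirmed:
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.closeTries = 5
a1.sinks.k1.hdfs.retryInterval = 180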



--
This message was sent by Atlassian JIRA
(v6.2#6252)
